I have built an environment with Docker Compose in order to run Robot Framework tests. The environment consists of a Django web app, Postgres, and a Robot Framework container. The problem I have is that I get many blank screens in different tests, while running the same tests against an external Django web app instance installed on a virtual machine doesn't have this problem.
The blank screen means elements are not found, hence the many failures:
JavascriptException: Message: javascript error: Cannot read property 'get' of undefined
(Session info: headless chrome=84.0.4147.89)
I am sure the problem is with the Django app container itself and not the Robot container, since, as said above, I have run the same Robot container against a different web app instance installed outside Docker and it worked.
docker-compose.yml:
version: "3.6"
services:
redis:
image: redis:3.2
ports:
- 6379
networks:
local:
ipv4_address: 10.0.0.20
smtpd:
image: mysmtpd:1.0.5
ports:
- 25
networks:
- local
postgres:
image: mypostgres
build:
context: ../dias-postgres/
args:
VERSION: ${POSTGRES_TAG:-12}
hostname: "postgres"
environment:
POSTGRES_DB: ${POSTGRES_USER}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
networks:
local:
ipv4_address: 10.0.0.100
ports:
- 5432
volumes:
- my-postgres:/var/lib/postgresql/data
app:
image: mypyenv:${PYENV_TAG:-1.1}
tty: true
stdin_open: true
user: ${MY_USER:-jenkins}
networks:
local:
ipv4_address: 10.0.0.50
hostname: "app"
ports:
- 8000
volumes:
- ${WORKSPACE}:/app
environment:
ALLOW_HOST: "*"
PGHOST: postgres
PGUSER: ${POSTGRES_USER}
PGDATABASE: ${POSTGRES_USER}
PGPASSWORD: ${POSTGRES_PASSWORD}
ANONYMIZE: "false"
REDIS_HOST: redis
REDIS_DB: 2
APP_PATH: ${APP_PATH}
APP: ${MANDANT}
TIMER: ${TIMER:-20}
EMAIL_BACKEND: "dias.core.log.mail.SmtpEmailBackend"
EMAIL_HOST: "smtpd"
EMAIL_PORT: "25"
robot:
image: myrobot:${ROBOT_TAG:-1.0.9}
user: ${ROBOT_USER:-jenkins}
networks:
local:
ipv4_address: 10.0.0.70
volumes:
- ${WORKSPACE}:/app
- ${ROBOT_REPORTS_PATH}:/APP_Robot_Reports
environment:
APP_ROBOT: ${APP_ROBOT}
TIMER: ${TIMER:-20}
PGHOST: postgres
PGUSER: ${POSTGRES_USER}
PGDATABASE: ${POSTGRES_USER}
PGPASSWORD: ${POSTGRES_PASSWORD}
THREADS: ${THREADS:-4}
tty: true
stdin_open: true
entrypoint: start-robot
networks:
local:
driver: bridge
ipam:
config:
- subnet: 10.0.0.0/24
volumes:
my-postgres:
external: true
name: my-postgres
I have monitored the app container's stats and nothing is abnormal during testing. I have also tested the app manually in a browser and it looks fine, with nothing wrong about it.
Note: there is no mismatch between the chromedriver and Google Chrome versions (this shouldn't matter anyway, since the same Robot container has worked against another instance where the Django app was not run in Docker).
Does anyone have an idea?
I hadn't paid attention before to the fact that I run pabot with 8 processes while the Django app was started with only 2 Celery workers. As soon as I increased the Celery workers to 4, it worked. I'm not sure this is the actual cause, but it makes sense to me and it did fix the problem.
celery -A server -c ${CELERY_CONCURRENCY:-2} worker
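A minimal sketch of the alignment that fixed it, assuming the Celery worker is started as above and that pabot's process count comes from the THREADS variable already present in the compose file (the tests/ path is only a placeholder):
# keep pabot's process count no higher than the Celery worker pool
export CELERY_CONCURRENCY=4                                # worker pool size in the app container
celery -A server -c ${CELERY_CONCURRENCY:-2} worker &
pabot --processes ${THREADS:-4} --outputdir /APP_Robot_Reports tests/   # tests/ is hypothetical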
Related
I am new to both Docker and Selenium grid and am having issues getting my web app to connect to the selenium hub.
docker-compose.yml
version: '3.8'
services:
db:
image: postgres
environment:
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
ports:
- "${POSTGRES_PORT}:5432"
volumes:
- pgdata:/var/lib/postgresql/data
web:
build:
context: ..
dockerfile: docker/Dockerfile
environment:
FLASK_ENV: ${FLASK_ENV}
FLASK_CONFIG: ${FLASK_CONFIG}
APPLICATION_DB: ${APPLICATION_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_HOSTNAME: "db"
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_PORT: ${POSTGRES_PORT}
command: flask run --host 0.0.0.0
volumes:
- ..:/opt/code
ports:
- "5000:5000"
chrome:
image: selenium/node-chrome:4.0.0-20211013
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
- SE_NODE_GRID_URL=http://localhost:4444
ports:
- "6900:5900"
edge:
image: selenium/node-edge:4.0.0-20211013
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
- SE_NODE_GRID_URL=http://localhost:4444
ports:
- "6901:5900"
firefox:
image: selenium/node-firefox:4.0.0-20211013
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
- SE_NODE_GRID_URL=http://localhost:4444
ports:
- "6902:5900"
selenium-hub:
image: selenium/hub:4.0.0-20211013
container_name: selenium-hub
ports:
- "4442:4442"
- "4443:4443"
- "4444:4444"
volumes:
pgdata:
When running the stack of containers and checking netstat -a, I can see my desktop listening on port 4444; when I kill the containers, it's not.
I can also verify that the hub is running and all of my nodes are connecting fine by visiting http://localhost:4444. However, when I run driver = webdriver.Remote(command_executor="http://localhost:4444") from my Python Flask app (which is running in the container specified as web above), I get the error:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=4444): Max retries
exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection
object at 0x7fd4855b7730>: Failed to establish a new connection: [Errno 111] Connection refused'))
I have tried specifying the desired capabilities to match the driver for specific nodes; however, I receive the same error regardless.
I am using Selenium's latest build, 4.0.0, and as you can see, 4.0.0 images for the parts of the grid, so I don't think it's a compatibility issue.
docker ps
Name Command State Ports
------------------------------------------------------------------------------------------------------------------------
development_chrome_1 /opt/bin/entry_point.sh Up 0.0.0.0:6900->5900/tcp,:::6900->5900/tcp
development_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp,:::5432->5432/tcp
development_edge_1 /opt/bin/entry_point.sh Up 0.0.0.0:6901->5900/tcp,:::6901->5900/tcp
development_firefox_1 /opt/bin/entry_point.sh Up 0.0.0.0:6902->5900/tcp,:::6902->5900/tcp
development_web_1 flask run --host 0.0.0.0 Up 0.0.0.0:5000->5000/tcp,:::5000->5000/tcp
selenium-hub /opt/bin/entry_point.sh Up 0.0.0.0:4442->4442/tcp,:::4442->4442/tcp,
0.0.0.0:4443->4443/tcp,:::4443->4443/tcp,
0.0.0.0:4444->4444/tcp,:::4444->4444/tcp
I feel like I'm fundamentally missing something here. Any thoughts?
I see the mistake now. I was attempting to connect to http://localhost:4444 with my client, when I needed to specify the hub's service name on the Docker network deployed with the Selenium Grid.
Fix
Change this line in your flask_app.py
driver = webdriver.Remote(command_executor="http://localhost:4444")
To:
driver = webdriver.Remote(command_executor="http://container-name:4444")
where container-name is the Selenium hub's container name set in docker-compose.yml:
selenium-hub:
image: selenium/hub:4.0.0-20211013
container_name: selenium-hub
ports:
- "4442:4442"
- "4443:4443"
- "4444:4444"
in my case: "selenium-hub"
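A minimal end-to-end sketch of the working setup, assuming the service names from the compose file above (selenium-hub and web); the page loaded and the title check are only illustrative:
from selenium import webdriver

# Connect to the hub by its service/container name, not localhost,
# because this code runs inside the 'web' container on the Compose network.
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444",
    options=options,
)

# The browser itself runs in the chrome node container, so it must also
# reach the Flask app by service name rather than localhost.
driver.get("http://web:5000")
print(driver.title)
driver.quit()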
Resource on docker networking used: https://docs.docker.com/compose/networking/
Final Thoughts
I guess I got tripped up by the fact that I can still use http://localhost:port from my desktop to reach both the grid hub and the web container. The difference is where the client request comes from: from outside the Docker stack the published ports on localhost work, while from within it you have to address services by name. Anyway, I hope this helps someone.
I have several micro-services running as docker containers. All web services work fine and route correctly.
The only issue is the websocket service.
The websocket service itself uses Python websockets and has its own TLS certificates.
Trying to access the websocket with wss://websocket.localhost fails; in the setup below it doesn't find the page at all.
In my previous configurations, it resulted in a Bad Gateway error.
Apparently Traefik works with websockets out of the box, with no additional configuration. That doesn't seem to be the case here. Any pointers?
The websocket connection works without Docker or Traefik involved, so I have ruled that out.
Any help on this would be extremely appreciated.
docker-compose.yml
version: "3.7"
networks:
web:
external: true
internal:
external: false
volumes:
mysql_data:
services:
traefik:
image: traefik:v2.2.1
container_name: traefik
restart: always
ports:
- "80:80"
- "443:443"
expose:
- 8080
environment:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config/:/config
- ./traefik.yml:/traefik.yml
networks:
- web
- internal
labels:
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.entrypoints=secure
- traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
- traefik.http.routers.traefik.service=api@internal
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
expose:
- 8080
labels:
- traefik.http.routers.dozzle.tls=true
- traefik.http.routers.dozzle.entrypoints=secure
- traefik.http.routers.dozzle.rule=Host(`dozzle.localhost`) || Host(`logs.localhost`)
networks:
- internal
db:
image: mysql:latest
container_name: db
environment:
MYSQL_ROOT_PASSWORD: ########
restart: always
healthcheck:
test: "exit 0"
command: --default-authentication-plugin=mysql_native_password
ports:
- '3306:3306'
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
websocket:
image: local-websocket-image
container_name: websocket-stage
restart: on-failure
command: python server.py
depends_on:
db:
condition: service_healthy
expose:
- 8080
networks:
- web
- internal
environment:
- PATH_TO_CONFIG=/src/setup.cfg
volumes:
- ${PWD}/docker-config:/src
- ${PWD}/config/certs/socket:/var
labels:
- traefik.http.routers.core-socket-stage-router.tls=true
- traefik.http.routers.core-socket-stage-router.entrypoints=secure
- traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
traefik.yml
entryPoints:
insecure:
address: :80
http:
redirections:
entryPoint:
to: secure
scheme: https
secure:
address: :443
log:
level: INFO
accessLog:
filePath: "traefik-access.log"
bufferingSize: 100
api:
dashboard: true
insecure: true
ping: {}
providers:
file:
filename: /config/dynamic.yml # traefik dynamic configuration
watch: true # everytime it changes, it will be reloaded
docker:
endpoint: "unix:///var/run/docker.sock"
exposedByDefault: true
config/dynamic.yml
tls:
stores:
default:
defaultCertificate:
certFile: cert.crt
keyFile: key.key
certificates:
- certFile: crt.crt
keyFile: key.key
stores:
- default
domains:
- main: "localhost"
Looking at your configuration, the following things don't fit:
The docker-compose project name will be part of the domain names. The default is to use the parent folder name of your docker-compose.yaml. You didn't specify it here, therefore I assume it to be traefik. You can set it explicitly in the docker-compose call with docker-compose -p traefik up or by setting the environment variable COMPOSE_PROJECT_NAME (see the short example after these points).
You are using the domain name '.localhost', but you don't define the domain name explicitly. That means the default name is used, which is derived from the service name, the project name (the folder where the docker-compose file is stored), and the docker network name that you attach to, with this pattern: servicename.projectname_networkname.
Use the attributes hostname and domainname to explicitly define a name (only works for networks with internal=false).
When having two network connections and additionally a domainname definition you get the following domain names:
db.traefik_internal (only intern, db.localhost will not work)
dozzle.traefik_internal (only intern, dozzle.localhost will not work)
traefik.localhost
traefik.traefik_web
traefik.traefik_internal
websocket.localhost
websocket.traefik_web
websocket.traefik_internal
external: true just means that the network is created externally, by docker network create or by another docker-compose project. The main effect is that it is not deleted when doing docker-compose down. It has nothing to do with the connection to the outside world.
To get an isolated internal network you have to use the option internal: true
The option condition: service_healthy is no longer supported for version: "3.7", so either remove it (it doesn't work the way you expect anyway) or change the version to 2.4.
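For reference, a short sketch of pinning the project name explicitly (the name traefik is just the assumption made above):
# either on the command line
docker-compose -p traefik up -d
# or via the environment
export COMPOSE_PROJECT_NAME=traefik
docker-compose up -d
Here is my current version of the docker-compose.yaml: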
version: "2.4"
networks:
web:
internal:
internal: true
volumes:
mysql_data:
services:
traefik:
image: traefik:v2.2.1
container_name: traefik
hostname: traefik
domainname: localhost
restart: always
ports:
- "80:80"
- "443:443"
expose:
- 8080
environment:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config/:/config
- ./traefik.yml:/traefik.yml
networks:
- web
- internal
labels:
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.entrypoints=secure
- traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
- traefik.http.routers.traefik.service=api@internal
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
hostname: dozzle
domainname: localhost
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
expose:
- 8080
labels:
- traefik.http.routers.dozzle.tls=true
- traefik.http.routers.dozzle.entrypoints=secure
- traefik.http.routers.dozzle.rule=Host(`dozzle.traefik_internal`) || Host(`logs.localhost`)
networks:
- internal
db:
image: mysql:latest
container_name: db
hostname: db
domainname: localhost
environment:
MYSQL_ROOT_PASSWORD: ########
restart: always
healthcheck:
test: "exit 0"
command: --default-authentication-plugin=mysql_native_password
ports:
- '3306:3306'
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
websocket:
image: local-websocket-image
container_name: websocket-stage
hostname: websocket
domainname: localhost
restart: on-failure
command: python server.py
depends_on:
db:
condition: service_healthy
expose:
- 8080
networks:
- web
- internal
environment:
- PATH_TO_CONFIG=/src/setup.cfg
volumes:
- ${PWD}/docker-config:/src
- ${PWD}/config/certs/socket:/var
labels:
- traefik.http.routers.core-socket-stage-router.tls=true
- traefik.http.routers.core-socket-stage-router.entrypoints=secure
- traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
I have a Django REST service and another Flask service that works as a broker for the application. They are separate projects, each running in its own Docker container.
I'm able to POST a product to the Django service, which is consumed by the Flask service; however, I cannot reach the Django service from Flask.
These containers are running on the same network, and I have already tried Thomasleveil's suggestions, including docker-host by qoomon.
The error received by the request is the same as before I tried to forward the traffic; the difference is that now the request hangs for a while before it fails.
The error is as follows:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0039388340>: Failed to establish a new connection: [Errno 110] Connection timed out'))
The request I'm trying to make is a POST at /api/products/1/like. At the moment, no body is required.
Here is how I'm doing the POST with Flask, where the IP is the Docker IP:
@app.route("/api/products/<int:id>/like", methods=["POST"])
def like(id):
req = requests.get("http://172.17.0.1:8000/api/user")
json = req.json()
try:
product_user = ProductUser(user_id=json["id"], product=id)
db.session.add(product_user)
db.session.commit()
publish("product_liked", id)
except:
abort(400, "You already liked this product")
return jsonify({
"message": "success"
})
Django's docker compose file (please ignore the service tcp_message_emitter):
version: '3.8'
services:
backend:
build:
context: .
dockerfile: Dockerfile
command: "python manage.py runserver 0.0.0.0:8000"
ports:
- 8000:8000
volumes:
- .:/app
depends_on:
- db
queue:
build:
context: .
dockerfile: Dockerfile
command: "python consumer.py"
depends_on:
- db
db:
image: mysql:5.7.22
restart: always
environment:
MYSQL_DATABASE: admin
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_ROOT_PASSWORD: root
volumes:
- .dbdata:/var/lib/mysql
ports:
- 33066:3306
dockerhost:
image: qoomon/docker-host
cap_add:
- NET_ADMIN
- NET_RAW
restart: on-failure
networks:
- backend
tcp_message_emitter:
image: alpine
depends_on:
- dockerhost
command: [ "sh", "-c", "while :; do date; sleep 1; done | nc 'dockerhost' 2323 -v"]
networks:
- backend
networks:
backend:
driver: bridge
Flask's docker compose file:
version: '3.8'
services:
backend:
build:
context: .
dockerfile: Dockerfile
command: "python main.py"
ports:
- 8001:5000
volumes:
- .:/app
depends_on:
- db
queue:
build:
context: .
dockerfile: Dockerfile
command: "python consumer.py"
depends_on:
- db
db:
image: mysql:5.7.22
restart: always
environment:
MYSQL_DATABASE: main
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_ROOT_PASSWORD: root
volumes:
- .dbdata:/var/lib/mysql
ports:
- 33067:3306
At this point, I know that I am missing some detail or have a misconfiguration somewhere.
You can have a look at the repo here: https://github.com/lfbatista/ms-ecommerce
Any help would be appreciated.
These containers are not actually on the same network. To put two containers from different docker-compose projects into one network you need to 'import' an existing network in one of the files. Here's how you can do it:
# first project
networks:
internal:
shared:
---
# second project
networks:
internal:
shared:
# This is where all the magic happens:
external: true # Means do not create a network, import existing.
name: admin_shared # Name of the existing network. It's usually <folder_name>_<network_name>.
Do not forget to put all services into the same internal network or they will not be able to communicate with each other. If a service has no explicitly assigned network, Docker Compose will create a <folder_name>_default network and put it there. You can assign networks like this:
services:
backend:
...
networks:
internal:
# Since this service needs access to the service in another project
# you put here two networks.
shared:
# This part is relevant for this specific question because
# both projects has services with identical names. To avoid
# mess with DNS names you can add an additional name to the
# service using 'alias'. This particular service will be
# available in shared network as 'flask-backend'.
aliases:
- flask-backend
db:
...
# You can also assign networks as an array if you need no extra configuration:
networks:
- internal
And here are the files from your repository. Instead of an IP address, one service can reach the other via flask-backend or django-backend respectively. Note that I cut out those strange 'host network containers'.
admin/docker-compose.yml:
version: '3.8'
services:
backend:
build:
context: .
dockerfile: Dockerfile
command: "python manage.py runserver 0.0.0.0:8000"
ports:
- 8000:8000
volumes:
- .:/app
depends_on:
- db
networks:
internal:
shared:
aliases:
- django-backend
queue:
build:
context: .
dockerfile: Dockerfile
command: "python consumer.py"
depends_on:
- db
networks:
- internal
db:
image: mysql:5.7.22
restart: always
environment:
MYSQL_DATABASE: admin
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_ROOT_PASSWORD: root
volumes:
- .dbdata:/var/lib/mysql
ports:
- 33066:3306
networks:
- internal
networks:
internal:
shared:
main/docker-compose.yml:
version: '3.8'
services:
backend:
build:
context: .
dockerfile: Dockerfile
command: "python main.py"
networks:
internal:
shared:
aliases:
- flask-backend
ports:
- 8001:5000
volumes:
- .:/app
depends_on:
- db
queue:
networks:
- internal
build:
context: .
dockerfile: Dockerfile
command: "python consumer.py"
depends_on:
- db
db:
image: mysql:5.7.22
restart: always
networks:
- internal
environment:
MYSQL_DATABASE: main
MYSQL_USER: root
MYSQL_PASSWORD: root
MYSQL_ROOT_PASSWORD: root
volumes:
- .dbdata:/var/lib/mysql
ports:
- 33067:3306
networks:
internal:
shared:
external: true
name: admin_shared
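With the shared network in place, the view from the question can drop the hard-coded bridge IP and call the Django service through its alias; a minimal sketch (django-backend is the alias defined above, and port 8000 comes from the Django compose file):
# inside the Flask 'like' view, instead of http://172.17.0.1:8000
req = requests.get("http://django-backend:8000/api/user")
json = req.json()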
Docker novice here.
I have committed new changes inside the application. These changes were copied from my local machine to the host machine, and then into the Docker container.
So I created a new image with sudo docker commit old_container_id new_image_name (djangotango-on-docker_web).
Then I started a new Docker container using the newly created image:
sudo docker run --name djangotango-web -d --expose 8000 djangotango-on-docker_web gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
Here djangotango-on-docker_web is my newly created image.
But my application gives a 502 error after this. My new container does not seem to be wired up properly.
docker-compose.yml
version: '3.8'
# networks:
# public_network:
# name: public_network
# driver: bridge
services:
web:
build:
context: .
dockerfile: Dockerfile.prod
# image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:web
command: gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000
volumes:
# - .:/home/app/web/
- static_volume:/home/app/web/static
- media_volume:/home/app/web/media
expose:
- 8000
env_file:
- ./.env.staging
networks:
service_network:
db:
image: postgres:12.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
env_file:
- ./.env.staging.db
networks:
service_network:
# depends_on:
# - web
# pgadmin:
# image: dpage/pgadmin4
# env_file:
# - ./.env.staging.db
# ports:
# - "8080:80"
# volumes:
# - pgadmin-data:/var/lib/pgadmin
# depends_on:
# - db
# links:
# - "db:pgsql-server"
# environment:
# - PGADMIN_DEFAULT_EMAIL=4652173624824872
# - PGADMIN_DEFAULT_PASSWORD=exampleeee
# - PGADMIN_LISTEN_PORT=80
# networks:
# service_network:
nginx-proxy:
build: nginx
# image: <aws-account-id>.dkr.ecr.<aws-region>.amazonaws.com/django-ec2:nginx-proxy
restart: always
ports:
- 443:443
- 80:80
networks:
service_network:
volumes:
- static_volume:/home/app/web/static
- media_volume:/home/app/web/media
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
- /var/run/docker.sock:/tmp/docker.sock:ro
labels:
- "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy"
depends_on:
- web
nginx-proxy-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
env_file:
- .env.staging.proxy-companion
networks:
service_network:
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
depends_on:
- nginx-proxy
networks:
service_network:
volumes:
postgres_data:
pgadmin-data:
static_volume:
media_volume:
certs:
html:
vhost:
How do I do this the correct way? I'm running my production application on my own domain name.
What I can understand from the logs is that my web container is not in the same network as the other containers now.
I don't want to rebuild via docker-compose; that would solve the problem, but I assume it would increase the image size, and I guess it's not recommended.
The correct approach here is to use only docker-compose commands, and to go ahead and rebuild your image:
docker-compose up --build --force-recreate web
Many of the options you'd need to recreate this with a plain docker run command are listed in the docker-compose.yml file, but some are generated implicitly. The docker run command you show doesn't have a --net option to attach to the Compose network (which could cause the error you're getting), and it doesn't have the -v options for the static and media volumes or the --env-file option for the settings from the .env.staging file.
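For comparison, a rough sketch of what the manual docker run would have to carry to match the Compose service; the generated network and volume names below are assumptions derived from the project name:
docker run --name djangotango-web -d \
  --net djangotango-on-docker_service_network \
  --env-file ./.env.staging \
  -v djangotango-on-docker_static_volume:/home/app/web/static \
  -v djangotango-on-docker_media_volume:/home/app/web/media \
  --expose 8000 \
  djangotango-on-docker_web \
  gunicorn djangotango.wsgi:application --bind 0.0.0.0:8000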
You should almost never use docker commit either. What's the code change you made in your image, and how would your colleagues get and test that change? Especially with the mentions of "prod" here, running code in production that you haven't built from source and tested through your usual CI process is usually discouraged.
(In terms of image size, a committed image will always be larger than the original image; an image rebuilt with docker build starts from the base image and will generally be smaller. Committing images also tends to lose options like the default command to run.)
I've set up my Django project and now I'm trying to test it with pytest. The issue is that running pytest within my containers doesn't stop them at the end of the process, so at the end of the day I'm stuck with multiple running containers from pytest, and often PostgreSQL connection problems.
My docker-compose file:
version: '3'
services:
license_server:
build: .
command: bash -c "python manage.py migrate && gunicorn LicenseServer.wsgi --reload --bind 0.0.0.0:8000"
depends_on:
- postgres
volumes:
- .:/code
environment:
DATABASE_NAME: "${DATABASE_NAME}"
DATABASE_USER: "${DATABASE_USER}"
DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
DATABASE_PORT: "${DATABASE_PORT}"
DATABASE_HOST: "${DATABASE_HOST}"
env_file: .env
ports:
- "8000:8000"
restart: always
postgres:
build: ./postgres
volumes:
- ./postgres/postgres_data:/var/lib/postgresql/data/
environment:
POSTGRES_PASSWORD: postgres
DATABASE_NAME: "${DATABASE_NAME}"
DATABASE_USER: "${DATABASE_USER}"
DATABASE_PASSWORD: "${DATABASE_PASSWORD}"
DATABASE_PORT: "${DATABASE_PORT}"
DATABASE_HOST: "${DATABASE_HOST}"
command: "-p 8005"
env_file: .env
ports:
- "127.0.0.1:8005:8005"
restart: always
nginx:
image: nginx:latest
container_name: nginx1
ports:
- "8001:80"
volumes:
- .:/code
- ./config/nginx:/etc/nginx/conf.d
depends_on:
- license_server
What I want to achieve is automatically closing containers after the testing process is finished.
When you have restart: always, the containers will just keep restarting whenever the processes spawned by the command exit. Even when you try to kill the running containers yourself, they will tend to restart (which can be a nuisance). Try removing restart: always from your service descriptions, as in the sketch below.
For more info, check the docker-compose.yml reference
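For example, a minimal sketch of the license_server service from the question with the restart policy removed (restart defaults to "no", so the container stops once the command exits):
services:
  license_server:
    build: .
    command: bash -c "python manage.py migrate && gunicorn LicenseServer.wsgi --reload --bind 0.0.0.0:8000"
    # restart: always   # removed: with the default "no", the container is not restarted after pytest finishes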