I am dockerizing a Python script and run it with CMD ["python", "script.py"] in the Dockerfile. When I bring the container up with docker-compose, it runs,
but when I docker exec into the container and run ps aux, I see the process at 100% CPU, and because of this the purpose of the service is not met.
If I do the same thing manually, i.e. docker exec into the container and run python script.py by hand, it works fine: only about 5% of the CPU is used and the service produces the expected result.
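For reference, the Dockerfile is roughly the sketch below; the base image, working directory and requirements step are assumptions, only the CMD line is taken from the question.

FROM python:3.9            # assumed base image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "script.py"]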
Service as written in docker-compose.yml:
consumer:
  restart: always
  image: consumer:latest
  build: ./consumer
  ports:
    - "8283:8283"
  depends_on:
    - redis
  environment:
    - REDIS_HOST=redis

redis:
  image: redis
  command: redis-server
  volumes:
    - ./redis_data:/data
  ports:
    - "6379:6379"
  restart: unless-stopped
It is a consumer application, which consumes messages from the producer and writes them into a Redis server.
Can someone advise why this behavior is observed?
Related
I have a docker-compose file for a Django application.
Below is the structure of my docker-compose.yml
version: '3.8'

volumes:
  pypi-server:

services:
  backend:
    command: "bash ./install-ppr_an_run_dphi.sh"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    volumes:
      - ./backend:/usr/src/app
    expose:
      - 8000:8000
    depends_on:
      - db
  pypi-server:
    image: pypiserver/pypiserver:latest
    ports:
      - 8080:8080
    volumes:
      - type: volume
        source: pypi-server
        target: /data/packages
    command: -P . -a . /data/packages
    restart: always
  db:
    image: mysql:8
    ports:
      - 3306:3306
    volumes:
      - ~/apps/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=gary
      - MYSQL_PASSWORD=tempgary
      - MYSQL_USER=gary_user
      - MYSQL_DATABASE=gary_db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    depends_on:
      - backend
The Django app depends on a couple of private packages hosted on the private pypi-server, without which it won't run.
I created a separate Dockerfile for django-backend alone, which installs the packages from requirements.txt and the packages from the private pypi-server. But the Dockerfile of the django-backend service runs even before the private pypi server is up.
If I move the installation of the private packages into the command of the django-backend service in docker-compose.yml, it works fine. The issue then is that if the backend is running and I want to run some commands in django-backend (./manage.py migrate), it says that the private packages are not installed.
I'm not sure how to proceed with this; it would be really helpful if I could get all these services running at once by just running docker-compose up --build -d.
Created a separate docker-compose file for the pypi-server, which will be up and running even before I build/start the other services.
Have you tried adding the pypi-server service to depends_on of the backend app?
backend:
  command: "bash ./install-ppr_an_run_dphi.sh"
  build:
    context: ./backend
    dockerfile: ./Dockerfile
  volumes:
    - ./backend:/usr/src/app
  expose:
    - 8000:8000
  depends_on:
    - db
    - pypi-server
Your docker-compose file raises a few questions though.
Why install custom packages into the backend service at run time? I can see many problems that might arise from this, such as latency during service restarts, possibly different environments between runs of the same version of the backend service, and any problem with the installation coming up during deployment and bringing it down. Installation should be done during the build of the Docker image. Could you provide your Dockerfile, maybe?
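For illustration, a build-time install could look roughly like the sketch below, assuming the pypi server is reachable from wherever the image is built; the hostname pypi-server:8080 and the Python version are assumptions, not taken from your setup.

FROM python:3.8
WORKDIR /usr/src/app
COPY requirements.txt .
# public packages plus the private ones, pulled from the private index at build time
RUN pip install -r requirements.txt \
    --extra-index-url http://pypi-server:8080/simple/ \
    --trusted-host pypi-server
COPY . .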
Is there any reason why the pypi server has to share a docker-compose file with the application? I'd suggest having it in a separate deployment, especially if it is to be shared among other projects.
Is the pypi server supposed to be used for anything other than as a source of the custom packages for the backend service? If not, then I'd consider getting rid of it / using it for the builds only.
Is there any good reason why you want to have all the ports exposed? This creates a significant attack surface: e.g. an attacker could bypass the reverse proxy and talk directly to the backend service on port 8000, or connect to the db on port 3306. N.B. docker-compose creates subnetworks among the containers, so they can access each other's ports even if those ports are not forwarded to the host machine.
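As a sketch of that point, only nginx really needs a ports: mapping; the other services stay reachable on the compose network by their service names (a trimmed-down fragment, assuming the service names from your file):

  db:
    image: mysql:8
    # no ports: mapping - backend still reaches it at db:3306 on the compose network
  backend:
    build:
      context: ./backend
    expose:
      - 8000        # documentation only; nginx reaches it at backend:8000
  nginx:
    build: ./nginx
    ports:
      - 80:80       # the only port published to the host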
Consider using docker secrets to store db credentials.
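A minimal sketch with compose secrets, assuming the password lives in a local file db_root_password.txt (the mysql image reads the *_FILE variants of its environment variables):

services:
  db:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_root_password
    secrets:
      - db_root_password

secrets:
  db_root_password:
    file: ./db_root_password.txt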
I have the following docker-compose file:
version: '3.1'

services:
  postgres_db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: default_db
    ports:
      - 54320:5432
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "test:1:1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
After running it using docker-compose up - everything looks fine from the terminal output.
I start a python console and run the following lines:
import json
from json import dumps
from kafka import KafkaProducer, KafkaConsumer

kp = KafkaProducer(bootstrap_servers=['localhost:9092'],
                   api_version=(0, 10),
                   value_serializer=lambda x: dumps(x).encode('utf-8'))
kc = KafkaConsumer('test',
                   bootstrap_servers=['localhost:9092'],
                   api_version=(0, 10),
                   group_id=None,
                   auto_offset_reset='earliest',
                   value_deserializer=lambda json_data: json.loads(json_data.decode('utf-8')))
data = {"test": "test"}
kp.send(topic="test", value=data)
for message in kc:
    print(message.value)
However, after running this the console simply hangs and it doesn't look like the message was consumed or produced. Any ideas what went wrong here? Thanks!
Either you need to run your Python code in a container and set
bootstrap_servers=['kafka:9092']
Or you need to advertise Kafka back to the clients on your host machine
KAFKA_ADVERTISED_HOST_NAME: localhost
You can read the wurstmeister README on the usage of HOSTNAME_COMMAND as well
I'd also recommend running the producer and consumer separately as you test them
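For example, with KAFKA_ADVERTISED_HOST_NAME: localhost you could run the two sketches below from the host in separate terminals; the topic name and api_version come from the question, the rest is a plain kafka-python sketch.

# producer.py (sketch)
from json import dumps
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['localhost:9092'],
                         api_version=(0, 10),
                         value_serializer=lambda x: dumps(x).encode('utf-8'))
producer.send(topic='test', value={'test': 'test'})
producer.flush()  # make sure the message leaves the client buffer

# consumer.py (sketch)
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer('test',
                         bootstrap_servers=['localhost:9092'],
                         api_version=(0, 10),
                         group_id=None,
                         auto_offset_reset='earliest',
                         value_deserializer=lambda m: json.loads(m.decode('utf-8')))
for message in consumer:
    print(message.value)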
I have a dockerized setup running a Django app within which I use Celery tasks. Celery uses Redis as the broker.
Versioning:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.15.0, build e12f3b9
Django==1.9.6
django-celery-beat==1.0.1
celery==4.1.0
celery[redis]
redis==2.10.5
Problem:
My celery workers appear to be unable to connect to the redis container located at localhost:6379. I am able to telnet into the redis server on the specified port. I am able to verify redis-server is running on the container.
When I manually connect to the Celery docker instance and attempt to create a worker using the command celery -A backend worker -l info I get the notice:
[2017-11-13 18:07:50,937: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
Trying again in 4.00 seconds...
Notes:
I am able to telnet in to the redis container on port 6379. On the redis container, redis-server is running.
Is there anything else that I'm missing? I've gone pretty far down the rabbit hole, but feel like I'm missing something really simple.
DOCKER CONFIG FILES:
docker-compose.common.yml here
docker-compose.dev.yml here
When you use docker-compose, you aren't going to be using localhost for inter-container communication, you would be using the compose-assigned hostname of the container. In this case, the hostname of your redis container is redis. The top level elements under services: are your default host names.
So for celery to connect to redis, you should try redis://redis:6379/0. Since the protocol and the service name are the same, I'll elaborate a little more: if you named your redis service "butter-pecan-redis" in your docker-compose, you would instead use redis://butter-pecan-redis:6379/0.
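As a sketch, assuming a standard Celery application module and the service name redis from your compose files, that would be:

# backend/celery.py (sketch; the module layout is an assumption)
from celery import Celery

app = Celery('backend',
             broker='redis://redis:6379/0',
             backend='redis://redis:6379/0')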
Also, docker-compose.dev.yml doesn't appear to have celery and redis on a common network, which might cause them not to be able to see each other. I believe they need to share at least one network in common to be able to resolve their respective host names.
Networking in docker-compose has an example in the first handful of paragraphs, with a docker-compose.yml to look at.
You may need to add links and depends_on sections to your docker-compose file, and then reference the containers by their hostnames.
Updated docker-compose.yml:
version: '2.1'

services:
  db:
    image: postgres
  memcached:
    image: memcached
  redis:
    image: redis
    ports:
      - '6379:6379'
  backend-base:
    build:
      context: .
      dockerfile: backend/Dockerfile-base
    image: "/backend:base"
  backend:
    build:
      context: .
      dockerfile: backend/Dockerfile
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- gunicorn backend.wsgi:application -b 0.0.0.0:8000 -k gevent -w 3
    ports:
      - 8000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  celery:
    image: "/backend:${ENV:-local}"
    command: ./wait-for-it.sh db:5432 -- celery worker -E -B --loglevel=INFO --concurrency=1
    environment:
      C_FORCE_ROOT: "yes"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend-base:
    build:
      context: .
      dockerfile: frontend/Dockerfile-base
      args:
        NPM_REGISTRY: http://.view.build
        PACKAGE_INSTALLER: yarn
    image: "/frontend:base"
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    image: "/frontend:${ENV:-local}"
    command: 'bash -c ''gulp'''
    working_dir: /app/user
    environment:
      PORT: 3000
    links:
      - db
      - redis
      - memcached
    depends_on:
      - db
      - redis
      - memcached
Then configure the URLs for redis, postgres, memcached, etc. with:
redis://redis:6379/0
postgres://user:pass@db:5432/database
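In Django settings that could translate to roughly the sketch below; the database name and credentials are placeholders, and the Celery setting name assumes the usual CELERY_ namespace.

# settings.py (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'database',
        'USER': 'user',
        'PASSWORD': 'pass',
        'HOST': 'db',    # compose service name, not localhost
        'PORT': 5432,
    }
}

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'memcached:11211',
    }
}

CELERY_BROKER_URL = 'redis://redis:6379/0'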
The issue for me was that all of the containers, including celery, had a networks argument specified. If this is the case, the redis container must also have the same argument, otherwise you will get this error. See below; the fix was adding networks:
redis:
  image: redis:alpine
  ports:
    - '6379:6379'
  networks:
    - server
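For completeness, the same key has to be present on the celery service, and the network itself must be declared at the top level, roughly:

  celery:
    image: my-celery-image   # placeholder for your actual celery service
    networks:
      - server

networks:
  server: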
I have a Django application with some model. I have a manage.py command that creates n model instances and saves them to the db. It runs with decent speed on my host machine.
But if I run it in Docker it runs very slowly: one instance is created and saved in 40-50 seconds. I think I am missing something about how Docker works; can somebody point out why the performance is so low and what I can do about it?
docker-compose.yml:
version: '2'

services:
  db:
    restart: always
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    volumes:
      - /usr/local/var/postgres:/var/lib/postgresql
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_db
      - POSTGRES_USER=postgres
  web:
    build: .
    command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
dockerfile for web service:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ADD . .
WORKDIR .
RUN pip install -r requirements.txt
RUN chmod +x wait-for-it.sh
The problem here is most likely the volume /usr/local/var/postgres:/var/lib/postgresql, since you are using it on a Mac. As I understand the Docker for Mac solution, it uses file sharing to implement host volumes, which is a lot slower than native filesystem access.
A possible workaround is to use a docker volume instead of a host volume. Here is an example:
version: '2'

volumes:
  postgres_data:

services:
  db:
    restart: always
    image: "postgres:9.6"
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_db
      - POSTGRES_USER=postgres
  web:
    build: .
    command: bash -c "./wait-for-it.sh db:5432 --timeout=15; python manage.py migrate; python manage.py runserver 0.0.0.0:8000; python manage.py mock 5"
    ports:
      - "8000:8000"
    expose:
      - "8000"
    depends_on:
      - db
Please note that this may complicate management of the postgres data, as you can't simply access the data from your Mac; you can only use the Docker CLI or containers to access, modify and back up this data. Also, I'm not sure what happens if you uninstall Docker from your Mac; it may be that you lose this data.
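For example, a backup could be taken with a throwaway container along these lines; note that compose prefixes the volume with the project name, so check docker volume ls for the exact name.

# "myproject" is a placeholder for the compose project name (usually the directory name)
docker run --rm \
  -v myproject_postgres_data:/source:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/postgres_data.tar.gz -C /source .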
Two things can be a probable cause:
Starting a docker container takes some time, so if you start a new container for each instance this can add up.
What storage driver do you use? Docker (often) defaults to the devicemapper loopback storage driver, which is slow. Here is some context. This will be painful, especially if you start this container often.
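You can check which driver is in use with the command below (the output line is just an example; it varies per host):

docker info | grep -i "storage driver"
# Storage Driver: devicemapper    <- the slow loopback case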
Other than that your config looks sensible and there are no obvious problems there. So if the above two points don't apply to you, please add some extra details, such as how you actually add these model instances.
I'm trying to find a good way to populate a database with initial data for a simple application. I'm using a tutorial from realpython.com as a starting point. I then run a simple Python script after the database is created to add a single entry, but when I do this the data is added multiple times even though I only call the script once.
population script (test.py):
from app import db
from models import *
t = Post("Hello 3")
db.session.add(t)
db.session.commit()
edit:
Here is the docker-compose file which I use to build the project:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web

data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"

postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
It references two different Dockerfiles:
Dockerfile #1, which builds the app container, is one line:
FROM python:3.4-onbuild
Dockerfile #2 is used to build the nginx container:
FROM tutum/nginx
RUN rm /etc/nginx/sites-enabled/default
ADD sites-enabled/ /etc/nginx/sites-enabled
edit2:
Some people have suggested that the data was persisting over several runs, and that was my initial thought as well. This is not the case, as I remove all active docker containers via docker rm before testing. Also, the number of "extra" entries is not consistent, ranging randomly from 3 to 6 in the few tests I have run so far.
It turns out this is a bug related to using the run command on containers with the restart: always instruction in docker-compose/the Dockerfile. To resolve this issue without a bug fix, I removed restart: always from the web container.
Related issue: https://github.com/docker/compose/issues/1013