docker-compose up: container exited with code 0 and its logs are empty - python

I need to containerize a Django web project with Docker. I divided the project into a dashboard, an api-server, and a database. When I run docker-compose up, it prints api-server exited with code 0 and the api-server container shows Exited (0), while the other containers run normally. When I run docker logs api-server, it returns nothing. I don't know how to diagnose the problem.
The api-server directory structure is as follows:
api-server/
    server/
    Dockerfile
    requirements.txt
    start.sh
    ...
Some of the compose yml content is as follows:
dashboard:
  image: nginx:latest
  container_name: nginx-dashboard
  volumes:
    - /nginx/nginx/default:/etc/nginx/conf.d/default.conf:ro
    - /nginx/dist:/var/www/html:ro
  ports:
    - "80:80"
  depends_on:
    - api-server
api-server:
  build: /api-server
  container_name: api-server
  volumes:
    - /api-server:/webapps
  ports:
    - "8000:8000"
  depends_on:
    - db
db:
  image: postgres
  container_name: Postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
  ports:
    - "5432:5432"
Some of the api-server Dockerfile content is as follows:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
RUN apt-get clean && apt-get update && apt-get upgrade -y && apt-get install -y python3-pip libpq-dev apt-utils
COPY ./requirements.txt /webapps/
RUN pip3 install -r /webapps/requirements.txt
COPY . /webapps/
CMD ["bash","-c","./start.sh"]
start.sh is as follows:
#!/usr/bin/env bash
cd server/
python manage.py runserver 0.0.0.0:8000
The output of docker-compose up is as follows:
root@VM:/home/test/Documents/ComposeTest# docker-compose up
Creating network "composetest_default" with the default driver
Creating Postgres ... done
Creating api-server ... done
Creating dashboard ... done
Attaching to Postgres, api-server, dashboard
Postgres | The files belonging to this database system will be owned by user "postgres".
Postgres | This user must also own the server process.
...
...
api-server exited with code 0
api-server exited with code 0
The output of docker logs api-server is empty.
I would appreciate it if you could tell me how to debug this problem; a solution would be even better.

You are already copying the api-server code into the image at build time, which should work fine, but the volume in your Docker Compose file overrides it, hiding all the pip packages and the copied code:
volumes:
  - /api-server:/webapps
Remove the volume from your Docker compose and it should work.
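With the volume gone, the api-server service from your compose file reduces to the following (nothing else needs to change, because the code and packages are already baked into the image at build time):

api-server:
  build: /api-server
  container_name: api-server
  ports:
    - "8000:8000"
  depends_on:
    - db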
Second, make the bash script executable:
COPY . /webapps/
RUN chmod +x ./start.sh
Third, you don't need to run Python through bash; there is nothing in that script that the CMD cannot do by itself, so why not run it directly as the CMD?
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Related

Dockerized Django app and MySQL with docker-compose using .env

I would like to run my Django project in a Docker container, with its database in another Docker container, on a Debian host.
When I run my containers, I get errors such as: Lost connection to MySQL server during query ([Errno 104] Connection reset by peer).
The command mysql > SET GLOBAL log_bin_trust_function_creators = 1 is very important because the database's Django user creates triggers.
Moreover, I use a .env file (the same one used to create the DB image) to store the DB user and password. Its path is settings/.env.
My code:
docker-compose.yml:
version: '3.3'
services:
  db:
    image: mysql:8.0.29
    container_name: db_mysql_container
    environment:
      MYSQL_DATABASE: $DB_NAME
      MYSQL_USER: $DB_USER
      MYSQL_PASSWORD: $DB_PASSWORD
      MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
    command: ["--log_bin_trust_function_creators=1"]
    ports:
      - '3306:3306'
    expose:
      - '3306'
  api:
    build: .
    container_name: django_container
    command: bash -c "pip install -q -r requirements.txt &&
      python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - '8000:8000'
    depends_on:
      - db
Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9.14-buster
ENV PYTHONUNBUFFERED=1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
How do I start my Django project? Is it possible to start only the DB container?
What commands do I need to execute, and what changes do I need to make? I'm a novice with Docker, so if you help me, please explain your commands and actions!
You can find this project on my GitHub.
Thanks!
To run a dockerized Django project, you can simply run the command below:
docker-compose run projectname bash -c "python manage.py createsuperuser"
The above command is used to create a superuser.
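As for starting the project and starting only the DB container, here is a minimal sketch of the usual commands, using the service names db and api from the compose file above. The --env-file flag assumes Compose 1.25 or later; it is needed here because Compose only auto-loads a .env file from the project root, not from settings/.env:

docker-compose --env-file settings/.env up --build    # build and start both containers
docker-compose --env-file settings/.env up db         # start only the DB container
docker-compose down                                   # stop and remove everything again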

Docker-Compose Output File To Local Host

I have the below docker-compose.yaml file that sets up a database and runs a python script
version: '3.3'
services:
  db:
    image: mysql:8.0
    cap_add:
      - SYS_NICE
    restart: always
    environment:
      - MYSQL_DATABASE=test_db
      - MYSQL_ROOT_PASSWORD=xxx
    ports:
      - '3310:3310'
    volumes:
      - db:/var/lib/mysql
  py_service:
    container_name: test_py
    build: .
    command: ./main.py -r compute_init
    depends_on:
      - db
    ports:
      - 80:80
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_USER: root
      DB_PASSWORD: xxx
      DB_NAME: test_db
    links:
      - db
    volumes:
      - py_output:/app/output
volumes:
  db:
    driver: local
  py_output:
To run it, I perform the following:
docker-compose build
docker-compose up
docker-compose run -v /home/ubuntu/docker_directory/output:/app/output/* py_service
Here is the Dockerfile
FROM python:3.7
RUN mkdir /app
WORKDIR /app
COPY env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python3","main.py","-r","compute_init"]
Now this works fine; I can see the data has been properly populated in the generated MySQL database.
At the end of the run, the Python script should dump a CSV file to /app/output/output.csv (via the pandas call df.to_csv("output/output.csv")).
My question is: how do I recover that CSV from the container into a local directory?
The script seems to finish without any errors, but I can't find the output file at the end.
It seems that using
docker-compose run -v $(pwd)/output:/app/output py_service
did the job.
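An alternative that makes a plain docker-compose up write to the host as well is to replace the named py_output volume with a bind mount; named volumes live inside Docker's own storage area, which is why the file never showed up in your working directory. A sketch of the changed py_service volumes entry, assuming an output directory next to the compose file:

py_service:
  # ...other keys unchanged...
  volumes:
    - ./output:/app/output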

Running Django's collectstatic in Dockerfile produces empty directory

I'm trying to run Django from a Docker container on Heroku, but to make that work, I need to run python manage.py collectstatic during my build phase. To achieve that, I wrote the following Dockerfile:
# Set up image
FROM python:3.10
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install poetry and identify Python dependencies
RUN pip install poetry
COPY pyproject.toml /usr/src/app/
# Install Python dependencies
RUN set -x \
    && apt update -y \
    && apt install -y \
        libpq-dev \
        gcc \
    && poetry config virtualenvs.create false \
    && poetry install --no-ansi
# Copy source into image
COPY . /usr/src/app/
# Collect static files
RUN python -m manage collectstatic -v 3 --no-input
And here's the docker-compose.yml file I used to run the image:
services:
  db:
    image: postgres
    env_file:
      - .env.docker.db
    volumes:
      - db:/var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5433:5432"
  web:
    build: .
    restart: always
    env_file:
      - .env.docker.web
    ports:
      - "8001:$PORT"
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
    networks:
      - backend
    command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi
volumes:
  db:
networks:
  backend:
    driver: bridge
The Dockerfile builds just fine, and I can even see that collectstatic is running and collecting the appropriate files during the build. However, when the build is finished, the only evidence that collectstatic ran is an empty directory called staticfiles. If I run collectstatic again inside of my container, collectstatic works just fine, but since Heroku doesn't persist files created after the build stage, they disappear when my app restarts.
I found a few SO answers discussing how to get collectstatic to run inside a Dockerfile, but that's not my problem; my problem is that it does run, but the collected files don't show up in the container. Anyone have a clue what's going on?
UPDATE: This answer did the trick. My docker-compose.yml was overriding the changes made by collectstatic with this line:
volumes:
  - .:/usr/src/app
If, like me, you want to keep the bind mount for ease of local development (so that you don't need to re-build each time), you can edit the command for the web service as follows:
command: bash -c "python -m manage collectstatic && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi"
Note that the image would have run just fine as-is had I pushed it to Heroku (since Heroku doesn't use the docker-compose.yml file), so this was just a problem affecting containers I created on my local machine.
You are overriding the content of /usr/src/app in your container when you add the
volumes:
  - .:/usr/src/app
to your docker compose file.
Remove it since you already copied everything during the build.
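If, as in the UPDATE above, you want to keep the bind mount for local development, another common pattern is to add an anonymous volume for the collected directory so the bind mount cannot hide it. A sketch, assuming collectstatic writes to /usr/src/app/staticfiles:

volumes:
  - .:/usr/src/app
  # anonymous volume: seeded from the image's staticfiles on first creation
  - /usr/src/app/staticfiles

Note that the anonymous volume is only populated when the container is first created; after a rebuild you may need docker-compose up -V (--renew-anon-volumes) to pick up freshly collected files.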

How to stop a docker database container

Trying to run the following docker compose file
version: '3'
services:
  database:
    image: postgres
    container_name: pg_container
    environment:
      POSTGRES_USER: partman
      POSTGRES_PASSWORD: partman
      POSTGRES_DB: partman
  app:
    build: .
    container_name: partman_container
    links:
      - database
    environment:
      - DB_NAME=partman
      - DB_USER=partman
      - DB_PASSWORD=partman
      - DB_HOST=database
      - DB_PORT=5432
      - SECRET_KEY='=321t+92_)#%_4b+f-&0ym(fs2p5-0-_nz5mhb_cak9zlo!bv#'
    depends_on:
      - database
    expose:
      - "8000"
      - "8020"
    ports:
      - "127.0.0.1:8020:8020"
volumes:
  pgdata: {}
When running docker-compose up --build with the following Dockerfile:
# Dockerfile
# FROM directive instructing base image to build upon
FROM python:3.7-buster
RUN apt-get update && apt-get install nginx vim -y --no-install-recommends
COPY nginx.default /etc/nginx/sites-available/default
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
RUN mkdir .pip_cache && \
    mkdir -p /opt/app && \
    mkdir -p /opt/app/pip_cache && \
    mkdir -p /opt/app/py-partman
COPY start-server.sh /opt/app/
COPY requirements.txt start-server.sh /opt/app/
COPY .pip_cache /opt/app/pip_cache/
COPY partman /opt/app/py-partman/
WORKDIR /opt/app
RUN pip install -r requirements.txt --cache-dir /opt/app/pip_cache
RUN chown -R www-data:www-data /opt/app
RUN /bin/bash -c 'ls -la; chmod +x /opt/app/start-server.sh; ls -la'
EXPOSE 8020
STOPSIGNAL SIGTERM
CMD ["/opt/app/start-server.sh"]
/opt/app/start-server.sh:
#!/usr/bin/env bash
# start-server.sh
ls
pwd
cd py-partman
ls
pwd
python manage.py createsuperuser --no-input
python manage.py makemigrations
python manage.py migrate
python manage.py initialize_entities
The database container keeps running; I want to stop it, because otherwise the Jenkins job will keep waiting for the container to terminate.
Any good ideas / better ideas on how to do so?
Maybe with -> docker stop <"container id or container name">
If it can't be stopped that way, use docker kill (docker stop has no -f flag).
Try it.
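Concretely, with the container_name values from the compose file above:

docker stop pg_container    # sends SIGTERM, then SIGKILL after the 10-second grace period
docker kill pg_container    # sends SIGKILL immediately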
Docker Compose is generally oriented around long-running server-type processes, and since database containers can frequently take 30-60 seconds to start up, it's usually beneficial not to repeatedly restart them. (In fact, the artifacts you show look a little odd for not including a python manage.py runserver command.)
It looks like there is a docker-compose up option for what you're looking for
docker-compose up --build --abort-on-container-exit
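In a Jenkins job you usually also want the build to fail when the app fails, so it may be worth using --exit-code-from, which makes docker-compose up exit with the chosen service's status and implies --abort-on-container-exit:

docker-compose up --build --exit-code-from app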
If you wanted to do this more manually, and especially if your app container's normal behavior is to actually start a server, you can docker-compose run the initialization command. This will start up the container and its dependencies, but it also expects its command to return, and then you can clean up yourself.
docker-compose build
docker-compose run app /opt/app/initialize-only.sh
docker-compose down -v

Applying changes in django/docker files

I'm new to development with Django and Docker, and I have a problem when I change a file in the project. My problem is as follows:
I make changes to the content of a file in the Django project (template, view, urls), but the change does not show up in the running app. Whenever I want to see my changes, I have to restart the server (I'm using nginx) by running docker-compose up again.
Is there a package I should install, or a change I should make, so that edits are picked up at runtime?
This is my Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
RUN pip install django-livereload
COPY . /opt/services/djangoapp/src
RUN cd hello && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "hello", "hello.wsgi:application"]
Let me know if there is any other information I can provide to give a better picture of the problem (if it is not clear enough). Here is my docker-compose.yml:
version: '3'
services:
  # database containers, one for each db
  database1:
    image: postgres:10
    volumes:
      - database1_volume:/var/lib/postgresql/data
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
  # web container, with django + gunicorn
  djangoapp:
    build: .
    environment:
      - DJANGO_SETTINGS_MODULE
    volumes:
      - .:/opt/services/djangoapp/src
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
      - .:/code
    networks:
      - database1_network
      - nginx_network
    depends_on:
      - database1
  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 8000:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static:/opt/services/djangoapp/static
      - media:/opt/services/djangoapp/media
    networks:
      - nginx_network
    depends_on:
      - djangoapp
networks:
  database1_network:
    driver: bridge
  database2_network:
    driver: bridge
  nginx_network:
    driver: bridge
volumes:
  database1_volume:
  static:
  media:
This is pretty simple. Here is what happens now:
In your Dockerfile you COPY your current folder (at the time you build your image) into the container. So while the container is running, it DOES NOT sync with your host (current working folder) if you change something on the host after creating the container.
If you want to sync your host with the container, you have to mount it as a volume, either with -v for a single container or with volumes in Docker Compose.
docker run -v /host/directory:/container/directory your-image
docker run -v $(pwd):/opt/services/djangoapp/src your-image
or using docker-compose if you have multiple containers:
version: '3'
services:
  web-service:
    build: . # path to Dockerfile
    image: your-image
    volumes:
      - /host/directory:/container/directory
      #- ./:/opt/services/djangoapp/src
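One caveat, assuming you keep the gunicorn CMD from the Dockerfile above: the bind mount only makes changed files visible inside the container; gunicorn still has to be told to watch them, for example with its --reload flag:

CMD ["gunicorn", "--reload", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "hello", "hello.wsgi:application"]

(Django's runserver reloads automatically, which is why it is commonly used instead for local development.)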
