Gitlab CI/CD deploy Django Docker container - python

I have been trying to set up a GitLab CI/CD config for a Django project that will be deployed as a container.
This is what I have tried.
CI/CD config:
image: creatiwww/docker-compose:latest

services:
  - docker:dind

stages:
  - lint
  - build
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

lint:
  stage: lint
  image: python:3.8
  before_script:
    - pip install pipenv
    - pipenv install --dev
  script:
    - pipenv run python -m flake8 --exclude=migrations,settings.py backend
  allow_failure: false

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
    - echo "IMAGE_APP_TAG=$TAG_LATEST" >> .env
    - docker-compose build
    - docker-compose push
  only:
    - master

deploy-to-prod:
  stage: deploy
  script:
    - eval $(ssh-agent -s)
    - echo "${ID_RSA}" | tr -d '\r' | ssh-add - > /dev/null
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
    - echo "IMAGE_APP_TAG=$TAG_LATEST" >> .env
    - echo "SECRET_KEY=$SECRET_KEY" >> .env
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" down --remove-orphans
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" pull
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" up -d
  only:
    - master
  when: manual
The pipeline succeeds, but when I check the container's log I get the following output:
python: can't open file 'manage.py': [Errno 2] No such file or directory
Also, the image field in docker ps is empty.
Please help.

Put the code in docker-compose.yml:
version: '3.7'
services:
  backend:
    build: ./project_name
    command: sh -c "cd project && python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    depends_on:
      - db
    network_mode: host
  db:
    image: postgres:12.0-alpine
    network_mode: host
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=db_user
      - POSTGRES_PASSWORD=db_password
      - POSTGRES_DB=db_name
volumes:
  postgres_data:
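One hedged addition: giving the backend service an explicit image: lets docker-compose build tag the image and docker-compose push send it to the GitLab registry, and the command should run from the directory that actually contains manage.py. A minimal sketch, assuming the IMAGE_APP_TAG variable written to .env by the CI jobs and a layout where manage.py sits at the root of ./project_name:
# Sketch only: service name, paths and variable names are taken from the
# question; adjust them to the real project layout.
services:
  backend:
    build: ./project_name
    image: ${IMAGE_APP_TAG}   # read from the .env file written in the build/deploy jobs
    command: sh -c "python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    depends_on:
      - db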

Related

django test failed to start in gitlab ci

I used Docker and Django for this project with a GitLab CI/CD pipeline, and the tests won't even start; they exit with the error below. The tests were running until I added some tests to the Django app, after which they failed:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Here is my Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
My docker-compose.yml:
version: "3.9"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
and my gitlab-ci.yml:
image: python:latest

services:
  - mysql:latest
  - postgres:latest

variables:
  POSTGRES_DB: postgres

cache:
  paths:
    - ~/.cache/pip/

test:
  variables:
    DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/$POSTGRES_DB"
  script:
    - pip install -r requirements.txt
    - python manage.py test

build:
  image: docker:19.03.12
  stage: build
  services:
    - docker:19.03.12-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
Create a network for the containers in your docker-compose file and share the network between your app and db.
Something like this:
db:
  networks:
    - network_name
  # your other db setup follows
web:
  networks:
    - network_name
  # your other web setup follows
networks:
  network_name:
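Applied to the compose file from the question, that could look like the following sketch (the network name app_net is an arbitrary choice):
# Sketch: the question's compose file with an explicit shared network added.
version: "3.9"
services:
  db:
    image: postgres
    networks:
      - app_net
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    networks:
      - app_net
    depends_on:
      - db
networks:
  app_net: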

Unable to install r package inside django docker container

I'm developing a web service with cookiecutter-django.
For some reason, I have to call an R script to respond to user requests. So, at first, I tried to add an R Dockerfile to local.yml (it is the last service in the file):
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: webservice_local_django
    container_name: django
    depends_on:
      - postgres
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: webservice_production_postgres
    container_name: postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data:Z
      - local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres
  docs:
    image: webservice_local_docs
    container_name: docs
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./webservice:/app/webservice:z
    ports:
      - "7000:7000"
  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: mailhog
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
    container_name: redis
  celeryworker:
    <<: *django
    image: webservice_local_celeryworker
    container_name: celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: webservice_local_celerybeat
    container_name: celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: webservice_local_flower
    container_name: flower
    ports:
      - "5555:5555"
    command: /start-flower
  R:
    image: r_local
    container_name: r_local
    build:
      context: .
      dockerfile: ./compose/local/R/Dockerfile
And here is the ./compose/local/R/Dockerfile
FROM r-base
WORKDIR /app
ADD . /app
RUN Rscript -e 'install.packages("dplyr")'
RUN Rscript -e 'install.packages("xgboost")'
RUN Rscript -e 'install.packages("TSrepr")'
RUN Rscript -e 'install.packages("ggplot2")'
RUN Rscript -e 'install.packages("foreach")'
RUN Rscript -e 'install.packages("doParallel")'
But when I run docker-compose -f local.yml up, some errors occur:
r_local | Fatal error: you must specify '--save', '--no-save' or '--vanilla'
r_local exited with code 2
I googled it and found some discussion, but it mentioned Rserve, which I don't think I'm using.
So I tried another way: installing R in the Django container directly.
sudo docker-compose -f local.yml run --rm django apt-get update
sudo docker-compose -f local.yml run --rm django apt install r-base-core -y
sudo docker-compose -f local.yml run --rm django apt-get install r-cran-quantreg
sudo docker-compose -f local.yml run --rm django apt-get install r-cran-sparsem
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("dplyr")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("xgboost")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("TSrepr")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("ggplot2")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("foreach")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("doParallel")'
Then, I got
E: Unable to locate package r-base-core
E: Unable to locate package r-cran-quantreg
E: Unable to locate package r-cran-sparsem
/entrypoint: line 45: exec: Rscript: not found
/entrypoint: line 45: exec: Rscript: not found
...
(many lines are omitted for clarity)
Is there any simple way to call Rscript in the Django Docker environment? Many thanks.
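For context, the '--save' error is what R prints when its interactive session is started without a terminal: the r-base image's default command is plain R, so under docker-compose up the container launches R with no TTY and exits. A hedged sketch of the R service with an explicit long-running command instead (the command itself is an arbitrary placeholder):
  # Sketch only: goes under services: in local.yml. Keeping the container alive
  # lets you call `docker-compose -f local.yml exec R Rscript ...`; any
  # long-running command (or tty: true with stdin_open: true) would do.
  R:
    image: r_local
    container_name: r_local
    build:
      context: .
      dockerfile: ./compose/local/R/Dockerfile
    command: ["sleep", "infinity"]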

Error running migrations in a docker container

I am getting this error when trying to run migrations in my container. I cannot seem to figure out why.
Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"alembic\": executable file not found in $PATH": unknown
Dockerfile:
FROM python:3.8.2
WORKDIR /workspace/
COPY . .
RUN pip install pipenv
RUN pipenv install --deploy --ignore-pipfile
#EXPOSE 8000
#CMD ["pipenv", "run", "python", "/workspace/bin/web.py"]
Docker-Compose:
version: '3'
services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file:
      - .env.database.local
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - "5050:80"
    depends_on:
      - db
  redis:
    image: "redis:alpine"
  web:
    build: .
    environment:
      - PYTHONPATH=/workspace
    env_file:
      - .env.local
    ports:
      - "8000:8000"
    volumes:
      - .:/workspace
    depends_on:
      - db
      - redis
    command: "alembic upgrade head && pipenv run python /workspace/bin/web.py"
The command I run when I encounter this problem:
docker-compose run web alembic revision --autogenerate -m "First migration"
I defined in my Dockerfile that my program runs in the /workspace directory, so it should point to it.
Yes, the issue was that I did not add it to my $PATH.
This is what I added inside my docker-compose:
- PATH=/directory/bin:$PATH
docker-compose run web pipenv run alembic revision --autogenerate -m "First migration"
Or change the Dockerfile to install into the system environment:
RUN pipenv install --deploy --ignore-pipfile --system
and run:
docker-compose run web alembic revision --autogenerate -m "First migration"
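As a hedged aside, the command: line in the compose file has the same two problems: alembic only exists inside the pipenv virtualenv, and a plain string command is not run through a shell, so the && is passed as a literal argument. A sketch of the web service's command under those assumptions:
  web:
    build: .
    # Sketch only: wrap in a shell so && is interpreted, and go through pipenv
    # so alembic and python resolve inside the virtualenv.
    command: sh -c "pipenv run alembic upgrade head && pipenv run python /workspace/bin/web.py"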

Run Selenium test in Docker container

I have a task to run Python selenium tests in docker.
First of all I run Selenium grid with docker-compose:
version: "3"
services:
selenium-hub:
image: selenium/hub:3.141.59-gold
container_name: selenium-hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome-debug
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
ports:
- 4577
firefox:
image: selenium/node-firefox-debug
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
ports:
- 4578
Then I build a container where I install pytest and copy my test framework:
FROM python:3.7-alpine3.8
COPY . .
RUN pip install -r requirements.txt
# update apk repo
RUN echo "http://dl-4.alpinelinux.org/alpine/v3.8/main" >> /etc/apk/repositories && \
echo "http://dl-4.alpinelinux.org/alpine/v3.8/community" >> /etc/apk/repositories
# install chromedriver
RUN apk update
RUN apk add chromium chromium-chromedriver
EXPOSE 3000
ENV PORT 3000
# install selenium
RUN pip install selenium==3.13.0
Then I tried to run my test file inside the container:
docker run 3ae7b37d8a7f pytest tests/basic_smoke_test.py --browser docker -v
As a result, I see a network error:
urllib.error.URLError: <urlopen error [Errno 99] Address not available>
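The [Errno 99] Address not available error is consistent with the test container not being able to reach the Selenium hub: a container started with a bare docker run is not attached to the grid's compose network, so neither localhost:4444 nor selenium-hub resolves from inside it. A hedged sketch of one workaround is to run the tests as a service in the same compose file (the service name is an assumption; the tests would then point their RemoteWebDriver at http://selenium-hub:4444/wd/hub):
  # Sketch only: added under the existing services: key of the grid compose file.
  tests:
    build: .                  # the pytest Dockerfile shown above
    depends_on:
      - selenium-hub
      - chrome
      - firefox
    command: pytest tests/basic_smoke_test.py --browser docker -v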

Run Python Console via docker-compose on Pycharm

I'm having some problems running PyCharm with a remote Python interpreter via docker-compose. Everything works just great except the Python console: when I press the run button, it just shows the following message:
"Error: Unable to locate container name for service "web" from docker-compose output"
I really can't understand why it keeps showing that, since my docker-compose.yml provides a web service.
Any help?
EDIT:
docker-compose.yml
version: '2'
volumes:
  dados:
    driver: local
  media:
    driver: local
  static:
    driver: local
services:
  beat:
    build: Docker/beat
    depends_on:
      - web
      - worker
    restart: always
    volumes:
      - ./src:/app/src
  db:
    build: Docker/postgres
    ports:
      - 5433:5432
    restart: always
    volumes:
      - dados:/var/lib/postgresql/data
  jupyter:
    build: Docker/jupyter
    command: jupyter notebook
    depends_on:
      - web
    ports:
      - 8888:8888
    volumes:
      - ./src:/app/src
  python:
    build:
      context: Docker/python
      args:
        REQUIREMENTS_ENV: 'dev'
    image: helpdesk/python:3.6
  redis:
    image: redis:3.2.6
    ports:
      - 6379:6379
    restart: always
  web:
    build:
      context: .
      dockerfile: Docker/web/Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - python
      - db
    ports:
      - 8001:8000
    restart: always
    volumes:
      - ./src:/app/src
  worker:
    build: Docker/worker
    depends_on:
      - web
      - redis
    restart: always
    volumes:
      - ./src:/app/src
Dockerfile
FROM python:3.6
# Set requirements environment
ARG REQUIREMENTS_ENV
ENV REQUIREMENTS_ENV ${REQUIREMENTS_ENV:-prod}
# Set PYTHONUNBUFFERED so the output is displayed in the Docker log
ENV PYTHONUNBUFFERED=1
# Install apt-transport-https
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    apt-transport-https
# Configure yarn repo
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install APT dependencies
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    locales \
    openssl \
    yarn
# Set locale
RUN locale-gen pt_BR.UTF-8 && \
    localedef -i pt_BR -c -f UTF-8 -A /usr/share/locale/locale.alias pt_BR.UTF-8
ENV LANG pt_BR.UTF-8
ENV LANGUAGE pt_BR.UTF-8
ENV LC_ALL pt_BR.UTF-8
# Copy requirements files to the container
RUN mkdir -p /tmp/requirements
COPY requirements/requirements-common.txt \
    requirements/requirements-$REQUIREMENTS_ENV.txt \
    /tmp/requirements/
# Install requirements
RUN pip install \
    -i http://root:test@pypi.defensoria.to.gov.br:4040/root/pypi/+simple/ \
    --trusted-host pypi.defensoria.to.gov.br \
    -r /tmp/requirements/requirements-$REQUIREMENTS_ENV.txt
# Remove requirements temp folder
RUN rm -rf /tmp/requirements
This is the Dockerfile for the python image; the web Dockerfile just builds FROM this image and copies the source folder into the container.
I think this is a dependency-chain problem: web depends on python, so when the python container comes up, the web one does not exist yet. That may cause the error.
Cheers
Installing the required libraries via the command line and running the Python interpreter from the PATH should suffice.
You can also refer to the JetBrains manual for how they configure interpreters for their IDEs.
