entrypoint.prod.sh file not found (Docker python buster image) - python

I'm getting an error saying my entrypoint.prod.sh file doesn't exist, even though an "ls" echoed in the Dockerfile shows the file is present in the right location with the right permissions; Docker still can't find it. I've tried many solutions but none of them work. Any suggestion or help would be much appreciated. Let me know if you need any extra information from me.
This is my main docker-compose.staging.yml file:
version: '3'

services:
  django:
    build:
      context: ./
      dockerfile: docker-compose/django/Dockerfile.prod
    expose:
      - 8000
    volumes:
      - ./backend:/app
      - static_volume:/app/django/staticfiles
    environment:
      CHOKIDAR_USEPOLLING: "true"
    depends_on:
      - postgresql
    stdin_open: true
    tty: true
    env_file:
      - ./.env.staging

  postgresql:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=sparrowteams
      - POSTGRES_PASSWORD=sparrowteams
      - POSTGRES_DB=sparrowteams
    ports:
      - 5432:5432
    volumes:
      - .:/data

  nginx-proxy:
    container_name: nginx-proxy
    build: nginx
    restart: always
    ports:
      - 443:443
      - 80:80
    volumes:
      - static_volume:/app/django/staticfiles
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - django

  nginx-proxy-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    env_file:
      - .env.staging.proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - html:/usr/share/nginx/html
      - vhost:/etc/nginx/vhost.d
    depends_on:
      - nginx-proxy

volumes:
  static_volume:
  certs:
  html:
  vhost:
Then I have my Dockerfile.prod:
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.1-buster as builder
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update && apt-get -y install libpq-dev gcc && pip install psycopg2 && apt-get -y install nginx
# lint
RUN pip install --upgrade pip
COPY ./backend .
# install dependencies
COPY backend/requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.1-buster
# create directory for the app user
RUN mkdir -p /app
# create the appropriate directories
ENV HOME=/app
ENV APP_HOME=/app/django
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint.prod.sh
COPY docker-compose/django/entrypoint.prod.sh $APP_HOME/entrypoint.prod.sh
RUN chmod +x $APP_HOME/entrypoint.prod.sh
# copy project
COPY ./backend $APP_HOME
RUN echo $(ls -la)
RUN sed -i 's/\r$//' $APP_HOME/entrypoint.prod.sh && \
    chmod +x $APP_HOME/entrypoint.prod.sh
ENTRYPOINT ["/bin/bash", "/app/django/entrypoint.prod.sh"]
And finally I have my entrypoint.prod.sh file (the one that supposedly doesn't exist):
#!/bin/bash
set -e
gunicorn SparrowTeams.wsgi:application --bind 0.0.0.0:8000
My nginx/vhost.d/default file:
location /staticfiles/ {
    alias /app/django/staticfiles/;
    add_header Access-Control-Allow-Origin *;
}
nginx/custom.conf:
client_max_body_size 10M;
nginx/dockerfile:
FROM jwilder/nginx-proxy
COPY vhost.d/default /etc/nginx/vhost.d/default
COPY custom.conf /etc/nginx/conf.d/custom.conf
My project structure looks something like this:
- SparrowTeams (main folder)
  - backend
    - SparrowTeams (Django project folder)
  - docker-compose
    - django
      - Dockerfile.prod
      - entrypoint.prod.sh
  - nginx
    - vhost.d
      - default
    - custom.conf
    - dockerfile
  - .env.staging
  - docker-compose.staging.yml (the compose file I'm running)

Your issue is the volume you mount at /app in your docker-compose file. That bind mount overrides the /app directory inside the container, hiding everything the image put there (including /app/django/entrypoint.prod.sh), and that's why Docker can't find the script.
django:
  build:
    context: ./
    dockerfile: docker-compose/django/Dockerfile.prod
  expose:
    - 8000
  volumes:
    - ./backend:/app                          # <==== this volume
    - static_volume:/app/django/staticfiles
You can either change the directory you mount ./backend to (that's what I'd do), or you can place your app in another directory in your final image. The problem is caused by both of them being called /app. A sketch of the first option is below.
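A minimal sketch of that fix (the /code/backend mount target is just an illustrative choice; any path that doesn't shadow /app works):

django:
  build:
    context: ./
    dockerfile: docker-compose/django/Dockerfile.prod
  expose:
    - 8000
  volumes:
    # the bind mount no longer hides the image's /app/django
    - ./backend:/code/backend
    - static_volume:/app/django/staticfiles

And since Dockerfile.prod already copies ./backend into /app/django at build time, for staging you could also drop the bind mount entirely and rely on the code baked into the image.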

Related

PyCharm debugger can't connect to Django app inside docker

I'm trying to debug a Django app inside a Docker container; the app is launched under uWSGI. Unfortunately, the PyCharm debugger can't connect to the container and stops with a timeout.
My run configuration (screenshot not included): I've added up --build so that all containers run in debug mode.
docker-compose.yml:
version: "2.4"
services:
rabbitmq:
image: rabbitmq:3.10.7-management-alpine
container_name: bo-rabbitmq
rsyslog:
build:
context: .
dockerfile: docker/rsyslog/Dockerfile
image: bo/rsyslog:latest
container_name: bo-rsyslog
platform: linux/amd64
env_file:
- .env
volumes:
- shared:/app/mnt
api:
build:
context: .
dockerfile: docker/api/Dockerfile
image: bo/api:latest
container_name: bo-api
platform: linux/amd64
ports:
- "8081:8081"
- "8082:8082"
env_file:
- .env
volumes:
- shared:/app/mnt
apigw:
build:
context: .
dockerfile: docker/apigw/Dockerfile
image: bo/apigw:latest
container_name: bo-apigw
platform: linux/amd64
ports:
- "8080:8080"
env_file:
- .env
volumes:
- shared:/app/mnt
depends_on:
- api
volumes:
shared:
Dockerfile (for api):
# CentOS 7 with Python 2.7
FROM nexus.custom.ru/base/python27:2.7.17

# Environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PYTHONPATH /app/
ENV PATH /app/:$PATH
ENV PIP_DEFAULT_TIMEOUT=100 \
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    PIP_NO_CACHE_DIR=1

# Install required software
RUN yum -y install enchant

# Working directory
WORKDIR /app

# Install and configure Poetry
RUN pip install --no-cache-dir poetry \
    && poetry config virtualenvs.create false

# Install project dependencies
COPY pyproject.toml .
COPY poetry.lock .
RUN poetry install --no-root --no-interaction

# Copy project files
COPY . .
COPY docker/api/manage.py ./
COPY docker/api/settings.py ./apps/adm/
COPY docker/api/config.py ./apps/adm/
COPY docker/api/config/development.yml ./config/
COPY docker/api/config/uwsgi/uwsgi.yml ./config/uwsgi/
COPY docker/api/entrypoint.sh ./

# Allow execution
RUN chmod +x /app/entrypoint.sh

# Entrypoint
ENTRYPOINT /app/entrypoint.sh
entrypoint.sh:
#!/bin/sh
# Create required directories
mkdir -p /app/mnt/spooler
mkdir -p /app/mnt/logs
mkdir -p /app/mnt/run
mkdir -p /app/mnt/shared/static
mkdir -p /app/mnt/protected_media
mkdir -p /app/mnt/htdocs
# Copy static
cp -r -n /app/static /app/mnt/shared/static
# Run uWSGI
uwsgi --yml=/app/config/uwsgi/uwsgi.yml
uwsgi.yml:
uwsgi:
  chdir: /app
  master: true
  procname-master: b::master
  procname: b::worker
  processes: 2
  threads: 4
  listen: 128
  max-requests: 1024
  buffer-size: 16384
  reload-on-exception: false
  master-fifo: /app/mnt/run/running.fifo
  vacuum: false
  lazy-apps: true
  enable-threads: true
  pythonpath: /app
  http: :8081
  env: DJANGO_SETTINGS_MODULE=apps.adm.settings
  module: apps.adm.wsgi
  stats: :8082
  stats-http: true
  memory-report: 1
  disable-logging: 0
  log-5xx: true
  log-4xx: true
  log-slow: 500
What am I doing wrong? Is it possible to connect PyCharm to a Django app running under uWSGI inside Docker?

Permission denied after creating django app inside docker container

So I am following this tutorial and have gotten all the way to the 'media' section and when I run the command:
docker-compose exec web python manage.py startapp upload
it all works fine, but when I open the newly created views.py file, edit it, and try to save, I get a permission denied error. I can open and edit the file as root, but not through my Atom code editor. I don't know where I'm going wrong; can someone help me? Here's my code:
Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
# copy project
COPY . .
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
docker-compose.yml:
services:
  web:
    build: ./app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db

  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django
      - POSTGRES_DB=hello_django_dev

volumes:
  postgres_data:
entrypoint.sh:
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
# python manage.py flush --no-input
# python manage.py migrate
exec "$#"
Try issuing chmod -R 777 on the folder where the generated files are located; a sketch follows.
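For example, from the project root on the host (a sketch; the app/upload path is assumed from the tutorial's layout). The files were created by root inside the container, so handing ownership back to your host user is a gentler alternative to chmod 777:

# what this answer suggests: make the generated files world-writable
chmod -R 777 ./app/upload

# usually better: give the files back to your host user
sudo chown -R "$USER":"$USER" ./app/upload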

Docker-compose cannot find file manage.py in runserver command

I want to dockerize my Django app.
I created my Dockerfile:
FROM python:3.6-alpine
RUN apk add --no-cache linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /DEV
WORKDIR /DEV
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . .
At this point I create my docker-compose.yml:
version: '3'

networks:
  mynetwork:
    driver: bridge

services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    networks:
      - mynetwork
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypass
      POSTGRES_DB: mydb
    volumes:
      - ./data:/var/lib/postgresql/data

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    networks:
      - mynetwork
    volumes:
      - .:/DEV
    ports:
      - "8000:8000"
    depends_on:
      - db
Then I create a .dockerignore file:
# Ignore
.DS_Store
.idea
.venv2
__pycache__
!manage.py
*.py[cod]
*$py.class
*.so
.Python
*.log
docker-compose.yml
Dockerfile
geckodriver.log
golog.py
golog.pyc
log.html
media
out
output.xml
report.html
startup.sh
templates
testlibs
.dockerignore
Well, at this point I run:
docker-compose build --no-cache
At the end the image is built correctly, but when I run:
docker-compose up
the system returns this error:
web_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
core_web_1 exited with code 2
Can someone help me with this issue?
Many thanks in advance.
Try making your Dockerfile more explicit with the locations and then change your docker-compose as well:
FROM python:3.6-alpine
RUN apk add --no-cache linux-headers libffi-dev jpeg-dev zlib-dev
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /DEV
WORKDIR /DEV
COPY ./requirements.txt /DEV/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENV PYTHONUNBUFFERED 1
COPY . /DEV/
web:
  build: .
  command: python /DEV/manage.py runserver 0.0.0.0:8000
  networks:
    - mynetwork
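You can then rebuild and check what the container actually sees at /DEV, including the effect of the .:/DEV bind mount (a sketch; web is the service name from the compose file above):

docker-compose build --no-cache web
docker-compose run --rm web ls -la /DEV

If manage.py is missing from that listing, the .:/DEV bind mount is covering the image's /DEV with a host directory that doesn't contain it.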

How can I reload python code in dockerized django?

I am using docker-compose to run 3 containers:
django + gunicorn, nginx, and PostgreSQL.
Every time I change my Python code, I run docker-compose restart web, but it takes a long time to restart.
I tried to restart gunicorn with
`docker-compose exec web ps aux |grep gunicorn | awk '{ print $2 }' |xargs kill -HUP`
but it didn't work.
How can I reload .py code in a shorter time?
I know that gunicorn can be set to hot reload python code. Can I do it manually with a command?
My docker-compose.yml:
version: '3'

services:
  db:
    build: ./db/
    volumes:
      - dbdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD

  web:
    build: .
    command: >
      sh -c "gunicorn abc.wsgi -b 0.0.0.0:8000"
    # sh -c "python manage.py collectstatic --noinput &&
    #        python manage.py loaddata app/fixtures/masterData.json &&
    #        gunicorn abc.wsgi -b 0.0.0.0:8000"
    volumes:
      - .:/var/www/django
      - ./static:/static/
    expose:
      - "8000"
    environment:
      - USE_S3
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_STORAGE_BUCKET_NAME
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - SENDGRID_API_KEY
      - SECRET_KEY
    depends_on:
      - db

  nginx:
    restart: always
    build: ./nginx/
    volumes:
      - ./static:/static/
    ports:
      - "8000:80"
    links:
      - web

  backup:
    image: prodrigestivill/postgres-backup-local:11-alpine
    restart: always
    volumes:
      - /var/opt/pgbackups:/backups
    links:
      - db
    depends_on:
      - db
    environment:
      - POSTGRES_HOST
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - SCHEDULE
      - BACKUP_KEEP_DAYS
      - BACKUP_KEEP_WEEKS
      - BACKUP_KEEP_MONTHS
      - HEALTHCHECK_PORT

volumes:
  dbdata:
Dockerfile - web:
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
ENV WKHTML2PDF_VERSION 0.12.4
# wget does not work for 0.12.5
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN mkdir /var/www
RUN mkdir /var/www/django
WORKDIR /var/www/django
ADD requirements.txt /var/www/django/
RUN apt-get update && apt-get install -y \
    libpq-dev \
    python-dev \
    gcc \
    openssl \
    build-essential \
    xorg \
    libssl1.0-dev \
    wget
RUN apt-get install -y sudo
RUN pip install --upgrade pip
RUN pip install -r requirements.txt && pip3 install requests && pip3 install pdfkit
# & pip3 install sendgrid-django
ADD . /var/www/django/
WORKDIR /var/www
RUN wget "https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/${WKHTML2PDF_VERSION}/wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
RUN tar -xJf "wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
WORKDIR wkhtmltox
RUN sudo chown root:root bin/wkhtmltopdf
RUN sudo cp -r * /usr/
WORKDIR /var/www/django
Dockerfile - nginx:
FROM nginx
# Copy configuration files to the container
COPY default.conf /etc/nginx/conf.d/default.conf
After a while, I found that this line was the problem:
gunicorn abc.wsgi -b 0.0.0.0:8000
This runs gunicorn as a subprocess of the shell started by sh -c. When I sent a HUP signal, only the shell (the container's main process) received it; gunicorn itself was never signalled, so it was not restarted and the code was not reloaded.
What I did was add "exec" in front of the command, so gunicorn replaces the shell and runs as the container's main process. Now I can use "docker-compose kill -s HUP web" to gracefully restart gunicorn, and my code is reloaded inside the container.
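A sketch of the corrected service command (gunicorn also supports a --reload flag for automatic code reloading during development):

web:
  build: .
  command: >
    sh -c "exec gunicorn abc.wsgi -b 0.0.0.0:8000"

After editing .py files on the host (bind-mounted at /var/www/django), gunicorn can then be reloaded with:

docker-compose kill -s HUP web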

Run Python Console via docker-compose on Pycharm

I'm having some problems running PyCharm with a remote Python interpreter via docker-compose. Everything works just great except the Python console: when I press the run button, it just shows the following message:
"Error: Unable to locate container name for service "web" from docker-compose output"
I really can't understand why it keeps showing that, since my docker-compose.yml provides a web service.
Any help?
EDIT:
docker-compose.yml
version: '2'

volumes:
  dados:
    driver: local
  media:
    driver: local
  static:
    driver: local

services:
  beat:
    build: Docker/beat
    depends_on:
      - web
      - worker
    restart: always
    volumes:
      - ./src:/app/src

  db:
    build: Docker/postgres
    ports:
      - 5433:5432
    restart: always
    volumes:
      - dados:/var/lib/postgresql/data

  jupyter:
    build: Docker/jupyter
    command: jupyter notebook
    depends_on:
      - web
    ports:
      - 8888:8888
    volumes:
      - ./src:/app/src

  python:
    build:
      context: Docker/python
      args:
        REQUIREMENTS_ENV: 'dev'
    image: helpdesk/python:3.6

  redis:
    image: redis:3.2.6
    ports:
      - 6379:6379
    restart: always

  web:
    build:
      context: .
      dockerfile: Docker/web/Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - python
      - db
    ports:
      - 8001:8000
    restart: always
    volumes:
      - ./src:/app/src

  worker:
    build: Docker/worker
    depends_on:
      - web
      - redis
    restart: always
    volumes:
      - ./src:/app/src
Dockerfile
FROM python:3.6
# Set requirements environment
ARG REQUIREMENTS_ENV
ENV REQUIREMENTS_ENV ${REQUIREMENTS_ENV:-prod}
# Set PYTHONUNBUFFERED so the output is displayed in the Docker log
ENV PYTHONUNBUFFERED=1
# Install apt-transport-https
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    apt-transport-https
# Configure yarn repo
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install APT dependencies
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    locales \
    openssl \
    yarn
# Set locale
RUN locale-gen pt_BR.UTF-8 && \
    localedef -i pt_BR -c -f UTF-8 -A /usr/share/locale/locale.alias pt_BR.UTF-8
ENV LANG pt_BR.UTF-8
ENV LANGUAGE pt_BR.UTF-8
ENV LC_ALL pt_BR.UTF-8
# Copy requirements files to the container
RUN mkdir -p /tmp/requirements
COPY requirements/requirements-common.txt \
    requirements/requirements-$REQUIREMENTS_ENV.txt \
    /tmp/requirements/
# Install requirements
RUN pip install \
    -i http://root:test@pypi.defensoria.to.gov.br:4040/root/pypi/+simple/ \
    --trusted-host pypi.defensoria.to.gov.br \
    -r /tmp/requirements/requirements-$REQUIREMENTS_ENV.txt
# Remove requirements temp folder
RUN rm -rf /tmp/requirements
This is the python image's Dockerfile; the web Dockerfile just builds FROM this image and copies the source folder into the container.
I think this is a dependency chain problem: web depends on python, so when the python container comes up, the web one doesn't exist yet. That may cause the error.
Cheers
Installing the required libraries via the command line and running the Python interpreter from the PATH should suffice.
You can also refer to the JetBrains manual on how they configure remote interpreters for their IDEs.
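One quick sanity check (a sketch): bring the stack up and confirm that a container actually exists for the web service, since that is the name PyCharm tries to resolve:

docker-compose up -d web
docker-compose ps    # a running container should be listed for the web service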
