I'm having some problems running PyCharm with a remote Python interpreter via docker-compose. Everything works just fine except the Python console: when I press the run button it just shows the following message:
"Error: Unable to locate container name for service "web" from
docker-compose output"
I really can't understand why it keeps showing me that when my docker-compose.yml provides a web service.
Any help?
EDIT:
docker-compose.yml
version: '2'
volumes:
dados:
driver: local
media:
driver: local
static:
driver: local
services:
beat:
build: Docker/beat
depends_on:
- web
- worker
restart: always
volumes:
- ./src:/app/src
db:
build: Docker/postgres
ports:
- 5433:5432
restart: always
volumes:
- dados:/var/lib/postgresql/data
jupyter:
build: Docker/jupyter
command: jupyter notebook
depends_on:
- web
ports:
- 8888:8888
volumes:
- ./src:/app/src
python:
build:
context: Docker/python
args:
REQUIREMENTS_ENV: 'dev'
image: helpdesk/python:3.6
redis:
image: redis:3.2.6
ports:
- 6379:6379
restart: always
web:
build:
context: .
dockerfile: Docker/web/Dockerfile
command: python manage.py runserver 0.0.0.0:8000
depends_on:
- python
- db
ports:
- 8001:8000
restart: always
volumes:
- ./src:/app/src
worker:
build: Docker/worker
depends_on:
- web
- redis
restart: always
volumes:
- ./src:/app/src
Dockerfile
FROM python:3.6
# Set requirements environment
ARG REQUIREMENTS_ENV
ENV REQUIREMENTS_ENV ${REQUIREMENTS_ENV:-prod}
# Set PYTHONUNBUFFERED so the output is displayed in the Docker log
ENV PYTHONUNBUFFERED=1
# Install apt-transport-https
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
apt-transport-https
# Configure yarn repo
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install APT dependencies
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
locales \
openssl \
yarn
# Set locale
RUN locale-gen pt_BR.UTF-8 && \
localedef -i pt_BR -c -f UTF-8 -A /usr/share/locale/locale.alias pt_BR.UTF-8
ENV LANG pt_BR.UTF-8
ENV LANGUAGE pt_BR.UTF-8
ENV LC_ALL pt_BR.UTF-8
# Copy requirements files to the container
RUN mkdir -p /tmp/requirements
COPY requirements/requirements-common.txt \
requirements/requirements-$REQUIREMENTS_ENV.txt \
/tmp/requirements/
# Install requirements
RUN pip install \
-i http://root:test#pypi.defensoria.to.gov.br:4040/root/pypi/+simple/ \
--trusted-host pypi.defensoria.to.gov.br \
-r /tmp/requirements/requirements-$REQUIREMENTS_ENV.txt
# Remove requirements temp folder
RUN rm -rf /tmp/requirements
This is the python image Dockerfile; the web Dockerfile just declares FROM this image and copies the source folder into the container.
I think this is a dependency-chain problem: web depends on python, so when the python container comes up, the web one doesn't exist yet. That may cause the error.
Cheers
Installing the required libraries via the command line and running the Python interpreter from the PATH should suffice.
You can also refer to the JetBrains manual on how remote interpreters are configured for their IDEs.
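As a quick sanity check, it can also be worth confirming that docker-compose itself can start the service and report a container for it, since the error says PyCharm is parsing docker-compose output; a minimal sketch using standard docker-compose commands (the service name web comes from the compose file above):

docker-compose up -d web   # start web and the services it depends on
docker-compose ps web      # the Name column should show the running web container

If no container shows up there, PyCharm will not be able to locate one either.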
Related
I'm trying to run a Python program that uses MongoDB, and I want to deploy it on a server, which is why I wrote a docker-compose file. My problem is that when I run the Python project on its own with the docker build -t PROJET_NAME . and docker run image commands, everything works properly; however, when executing docker-compose up -d, the python container restarts over and over again. What am I doing wrong?
I just tried to log it, but nothing shows up.
Here is the Dockerfile
FROM python:3.7
WORKDIR /app
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
ENV PROD=true
COPY . .
# Installing requirements
RUN pip install -r requirements.txt
RUN export PYTHONPATH=$PYTHONPATH:`pwd`
CMD ["python3", "foo.py"]
And the docker-compose.yml
version: '3'
services:
app:
build:
context: .
dockerfile: Dockerfile
container_name: app
restart: unless-stopped
environment:
APP_ENV: "prod"
APP_DEBUG: "False"
MONGODB_DATABASE: db
MONGODB_USERNAME: appuser
MONGODB_PASSWORD: mongopassword
MONGODB_HOSTNAME: mongodb
depends_on:
- mongodb
networks:
- internal
mongodb:
image: mongo
container_name: mongodb
restart: unless-stopped
command: mongod --auth
environment:
MONGO_INITDB_ROOT_USERNAME: mongodbuser
MONGO_INITDB_ROOT_PASSWORD: mongodbrootpassword
MONGO_INITDB_DATABASE: db
MONGODB_DATA_DIR: /data/db
MONDODB_LOG_DIR: /dev/null
volumes:
- mongodbdata:/data/db
networks:
- internal
networks:
internal:
driver: bridge
volumes:
mongodbdata:
driver: local
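A sketch of the standard docker/docker-compose commands that usually reveal why a service keeps restarting (the service/container name app comes from the compose file above):

docker-compose logs --tail=100 app    # output captured before each restart
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' app    # last exit status Docker recorded
docker-compose run --rm app python3 foo.py    # run the same image in the foreground to see the traceback directly

Because restart: unless-stopped keeps restarting the container, running the command in the foreground with docker-compose run is often the quickest way to see the actual error.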
I'm getting an issue where Docker says my entrypoint.prod.sh file doesn't exist, even though I have echoed an "ls" command and it shows that the file is present in the right location with the right permissions; Docker still isn't able to find it. I have tried many solutions but none are working. Any suggestion/help would be much appreciated. Let me know if you need any extra information from me.
so this is my main docker-compose.staging.yml file: -
version: '3'
services:
django:
build:
context: ./
dockerfile: docker-compose/django/Dockerfile.prod
expose:
- 8000
volumes:
- ./backend:/app
- static_volume:/app/django/staticfiles
environment:
CHOKIDAR_USEPOLLING: "true"
depends_on:
- postgresql
stdin_open: true
tty: true
env_file:
- ./.env.staging
postgresql:
image: postgres:13.1
environment:
- POSTGRES_USER=sparrowteams
- POSTGRES_PASSWORD=sparrowteams
- POSTGRES_DB=sparrowteams
ports:
- 5432:5432
volumes:
- .:/data
nginx-proxy:
container_name: nginx-proxy
build: nginx
restart: always
ports:
- 443:443
- 80:80
volumes:
- static_volume:/app/django/staticfiles
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
- /var/run/docker.sock:/tmp/docker.sock:ro
depends_on:
- django
nginx-proxy-letsencrypt:
image: jrcs/letsencrypt-nginx-proxy-companion
env_file:
- .env.staging.proxy-companion
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- certs:/etc/nginx/certs
- html:/usr/share/nginx/html
- vhost:/etc/nginx/vhost.d
depends_on:
- nginx-proxy
volumes:
static_volume:
certs:
html:
vhost:
Then I have my Dockerfile.prod: -
###########
# BUILDER #
###########
# pull official base image
FROM python:3.9.1-buster as builder
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apt-get update && apt-get -y install libpq-dev gcc && pip install psycopg2 && apt-get -y install nginx
# lint
RUN pip install --upgrade pip
COPY ./backend .
# install dependencies
COPY backend/requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.9.1-buster
# create directory for the app user
RUN mkdir -p /app
# create the appropriate directories
ENV HOME=/app
ENV APP_HOME=/app/django
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -y libpq-dev
COPY --from=builder /app/wheels /wheels
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY docker-compose/django/entrypoint.prod.sh $APP_HOME/entrypoint.prod.sh
RUN chmod +x $APP_HOME/entrypoint.prod.sh
# copy project
COPY ./backend $APP_HOME
RUN echo $(ls -la)
RUN sed -i 's/\r$//' $APP_HOME/entrypoint.prod.sh && \
chmod +x $APP_HOME/entrypoint.prod.sh
ENTRYPOINT ["/bin/bash", "/app/django/entrypoint.prod.sh"]
And then finally I have my entrypoint.prod.sh file (which is the one actually giving the error that it doesn't exist):
#!/bin/bash
set -e
gunicorn SparrowTeams.wsgi:application --bind 0.0.0.0:8000
My nginx/vhost.d/default file: -
location /staticfiles/ {
alias /app/django/staticfiles/;
add_header Access-Control-Allow-Origin *;
}
nginx/custom.conf: -
client_max_body_size 10M;
nginx/dockerfile: -
FROM jwilder/nginx-proxy
COPY vhost.d/default /etc/nginx/vhost.d/default
COPY custom.conf /etc/nginx/conf.d/custom.conf
My project structure looks something like this: -
- SparrowTeams (Main folder)
- backend
- SparrowTeams (Django project folder)
- docker-compose
- django
- Dockerfile.prod
- entrypoint.prod.sh
- nginx
- vhost.d
- default
- custom.conf
- dockerfile
- .env.staging
- docker-compose.staging.yml (Docker compose file that I'm running)
Your issue is that you have a volume that you mount to /app in your docker-compose file. That overrides the /app directory in your container and that's why it can't find the script.
django:
build:
context: ./
dockerfile: docker-compose/django/Dockerfile.prod
expose:
- 8000
volumes:
- ./backend:/app <==== This volume
- static_volume:/app/django/staticfiles
You can either change the name of the directory you mount ./backend to (that's what I'd do), or you can place your app in another directory in your final image. The problem is caused by both of them being called /app.
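For example, a sketch of the first option; the only change is the path ./backend is mounted to, so it no longer shadows /app/django, where the image put entrypoint.prod.sh (the target /app/backend_src is just an illustrative name):

django:
  build:
    context: ./
    dockerfile: docker-compose/django/Dockerfile.prod
  expose:
    - 8000
  volumes:
    - ./backend:/app/backend_src          # was ./backend:/app
    - static_volume:/app/django/staticfiles

In a staging/production compose file it is also common to drop that bind mount entirely and rely on the copy of the code baked into the image.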
I'm developing a web service with cookiecutter-django.
For some reason, I have to call an R script to respond to user requests. So, at first, I tried adding an R Dockerfile to local.yml (shown in the last section).
version: '3'
volumes:
local_postgres_data: {}
local_postgres_data_backups: {}
services:
django: &django
build:
context: .
dockerfile: ./compose/local/django/Dockerfile
image: webservice_local_django
container_name: django
depends_on:
- postgres
- mailhog
volumes:
- .:/app:z
env_file:
- ./.envs/.local/.django
- ./.envs/.local/.postgres
ports:
- "8000:8000"
command: /start
postgres:
build:
context: .
dockerfile: ./compose/production/postgres/Dockerfile
image: webservice_production_postgres
container_name: postgres
volumes:
- local_postgres_data:/var/lib/postgresql/data:Z
- local_postgres_data_backups:/backups:z
env_file:
- ./.envs/.local/.postgres
docs:
image: webservice_local_docs
container_name: docs
build:
context: .
dockerfile: ./compose/local/docs/Dockerfile
env_file:
- ./.envs/.local/.django
volumes:
- ./docs:/docs:z
- ./config:/app/config:z
- ./webservice:/app/webservice:z
ports:
- "7000:7000"
mailhog:
image: mailhog/mailhog:v1.0.0
container_name: mailhog
ports:
- "8025:8025"
redis:
image: redis:5.0
container_name: redis
celeryworker:
<<: *django
image: webservice_local_celeryworker
container_name: celeryworker
depends_on:
- redis
- postgres
- mailhog
ports: []
command: /start-celeryworker
celerybeat:
<<: *django
image: webservice_local_celerybeat
container_name: celerybeat
depends_on:
- redis
- postgres
- mailhog
ports: []
command: /start-celerybeat
flower:
<<: *django
image: webservice_local_flower
container_name: flower
ports:
- "5555:5555"
command: /start-flower
R:
image: r_local
container_name: r_local
build:
context: .
dockerfile: ./compose/local/R/Dockerfile
And here is the ./compose/local/R/Dockerfile
FROM r-base
WORKDIR /app
ADD . /app
RUN Rscript -e 'install.packages("dplyr")'
RUN Rscript -e 'install.packages("xgboost")'
RUN Rscript -e 'install.packages("TSrepr")'
RUN Rscript -e 'install.packages("ggplot2")'
RUN Rscript -e 'install.packages("foreach")'
RUN Rscript -e 'install.packages("doParallel")'
But when I run docker-compose -f local.yml up, some errors occur:
r_local | Fatal error: you must specify '--save', '--no-save' or '--vanilla'
r_local exited with code 2
I googled it and found some discussion, but it mentioned Rserve, which I don't think I'm using.
So I tried another way: installing R in the django container directly.
sudo docker-compose -f local.yml run --rm django apt-get update
sudo docker-compose -f local.yml run --rm django apt install r-base-core -y
sudo docker-compose -f local.yml run --rm django apt-get install r-cran-quantreg
sudo docker-compose -f local.yml run --rm django apt-get install r-cran-sparsem
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("dplyr")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("xgboost")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("TSrepr")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("ggplot2")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("foreach")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("doParallel")'
Then, I got
E: Unable to locate package r-base-core
E: Unable to locate package r-cran-quantreg
E: Unable to locate package r-cran-sparsem
/entrypoint: line 45: exec: Rscript: not found
/entrypoint: line 45: exec: Rscript: not found
...
(many lines are omitted for clarity)
Is there any simple way to call Rscript in the django container's environment? Many thanks.
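Two things seem to be going on here, for what it's worth. The official r-base image's default command starts an interactive R console, which aborts with exactly that '--save'/'--no-save' message when no terminal is attached, so an R service with no command of its own exits immediately. And each docker-compose run --rm django ... call starts a fresh, throwaway container, so the package lists fetched by apt-get update are gone before the install commands run and nothing that does get installed persists, which is why Rscript is later not found. The usual approach is to install R into the django image itself. A sketch of lines one might append to ./compose/local/django/Dockerfile, assuming it is Debian-based (package names are the stock Debian ones; if the image is Alpine, apk equivalents are needed instead):

# install R plus the CRAN packages that Debian already ships
RUN apt-get update \
    && apt-get install -y --no-install-recommends r-base r-cran-dplyr r-cran-ggplot2 r-cran-foreach \
    && rm -rf /var/lib/apt/lists/*
# remaining packages straight from CRAN (xgboost may need build tools present in the image)
RUN Rscript -e 'install.packages(c("xgboost", "TSrepr", "doParallel"), repos="https://cloud.r-project.org")'

With R baked into the image, the Django code can shell out to Rscript in the same container, and the separate R service in local.yml is no longer needed.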
So I am following this tutorial and have gotten all the way to the 'media' section, and when I run the command:
docker-compose exec web python manage.py startapp upload
it all works fine, but when I open the newly created views.py file, edit it, and try to save, I get a permission denied error. I can open and edit the file as root, but not through my Atom code editor. I don't know where I am going wrong; can someone help me? Here's my code:
Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
# copy project
COPY . .
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
docker-compose.yml:
services:
web:
build: ./app
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./app/:/usr/src/app/
ports:
- 8000:8000
env_file:
- ./.env.dev
depends_on:
- db
db:
image: postgres:12.0-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=hello_django
- POSTGRES_PASSWORD=hello_django
- POSTGRES_DB=hello_django_dev
volumes:
postgres_data:
entrypoint.sh:
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
# python manage.py flush --no-input
# python manage.py migrate
exec "$@"
Try issuing chmod -R 777 on the folder where it is located.
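Some context on why that helps: the startapp command runs inside the container as root, so the files it writes through the bind mount end up owned by root on the host, which is why Atom cannot save them. A sketch of the two usual fixes, run on the host from the folder containing docker-compose.yml (./app is the bind-mounted project directory from the compose file above):

sudo chmod -R 777 ./app            # what the answer suggests: world-writable, quick but very permissive
sudo chown -R $USER:$USER ./app    # alternative: hand ownership of the generated files back to your user

Either way, files the container creates later will be owned by root again until the container is run as a non-root user.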
I am using docker-compose to run 3 containers:
django + gunicorn, nginx, and PostgreSQL.
Every time I change my Python .py code, I run docker-compose restart web, but it takes a long time to restart.
I tried to restart gunicorn with
`docker-compose exec web ps aux |grep gunicorn | awk '{ print $2 }' |xargs kill -HUP`
But it didn't work.
How can I reload .py code in a shorter time?
I know that gunicorn can be set to hot-reload Python code. Can I do this manually with a command?
My docker-compose.yml:
version: '3'
services:
db:
build: ./db/
volumes:
- dbdata:/var/lib/postgresql/data/
environment:
- POSTGRES_DB
- POSTGRES_USER
- POSTGRES_PASSWORD
web:
build: .
command: >
sh -c "gunicorn abc.wsgi -b 0.0.0.0:8000"
# sh -c "python manage.py collectstatic --noinput &&
# python manage.py loaddata app/fixtures/masterData.json &&
# gunicorn abc.wsgi -b 0.0.0.0:8000"
volumes:
- .:/var/www/django
- ./static:/static/
expose:
- "8000"
environment:
- USE_S3
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_STORAGE_BUCKET_NAME
- POSTGRES_DB
- POSTGRES_USER
- POSTGRES_PASSWORD
- POSTGRES_HOST
- SENDGRID_API_KEY
- SECRET_KEY
depends_on:
- db
nginx:
restart: always
build: ./nginx/
volumes:
- ./static:/static/
ports:
- "8000:80"
links:
- web
backup:
image: prodrigestivill/postgres-backup-local:11-alpine
restart: always
volumes:
- /var/opt/pgbackups:/backups
links:
- db
depends_on:
- db
environment:
- POSTGRES_HOST
- POSTGRES_DB
- POSTGRES_USER
- POSTGRES_PASSWORD
- SCHEDULE
- BACKUP_KEEP_DAYS
- BACKUP_KEEP_WEEKS
- BACKUP_KEEP_MONTHS
- HEALTHCHECK_PORT
volumes:
dbdata:
Dockerfile - web:
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
ENV WKHTML2PDF_VERSION 0.12.4
# 0.12.5 wget does not work
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN mkdir /var/www
RUN mkdir /var/www/django
WORKDIR /var/www/django
ADD requirements.txt /var/www/django/
RUN apt-get update && apt-get install -y \
libpq-dev \
python-dev \
gcc \
openssl \
build-essential \
xorg \
libssl1.0-dev \
wget
RUN apt-get install -y sudo
RUN pip install --upgrade pip
RUN pip install -r requirements.txt && pip3 install requests && pip3 install pdfkit
# & pip3 install sendgrid-django
ADD . /var/www/django/
WORKDIR /var/www
RUN wget "https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/${WKHTML2PDF_VERSION}/wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
RUN tar -xJf "wkhtmltox-${WKHTML2PDF_VERSION}_linux-generic-amd64.tar.xz"
WORKDIR wkhtmltox
RUN sudo chown root:root bin/wkhtmltopdf
RUN sudo cp -r * /usr/
WORKDIR /var/www/django
Dockerfile - nginx:
FROM nginx
# Copy configuration files to the container
COPY default.conf /etc/nginx/conf.d/default.conf
After a while, I found that this line was the problem:
gunicorn abc.wsgi -b 0.0.0.0:8000
This line runs gunicorn as a subprocess of the shell. When I tried to send a HUP signal, only the shell (the container's main process) received it, so gunicorn was not killed and the code could not be reloaded.
What I did was add "exec" before the command, so gunicorn runs as the main process. Then I can use "docker-compose kill -s HUP web" to gracefully restart gunicorn, and my code is reloaded in the container.
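In compose terms the fix is a one-word change to the service command (a sketch; everything else in the file above stays the same):

web:
  build: .
  command: >
    sh -c "exec gunicorn abc.wsgi -b 0.0.0.0:8000"

After editing .py files on the host (they are bind-mounted into /var/www/django), the reload is:

docker-compose kill -s HUP web

For purely local development, gunicorn's --reload flag would watch the code and restart the workers automatically, without even the manual signal.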