Unable to install R package inside Django Docker container - python

I'm developing a web service with Cookiecutter Django.
For some reasons, I have to call an R script to respond to user requests. So, at first, I tried to add an R service (with its own Dockerfile) to local.yml; it is the last service in the file below.
version: '3'
volumes:
  local_postgres_data: {}
  local_postgres_data_backups: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: webservice_local_django
    container_name: django
    depends_on:
      - postgres
      - mailhog
    volumes:
      - .:/app:z
    env_file:
      - ./.envs/.local/.django
      - ./.envs/.local/.postgres
    ports:
      - "8000:8000"
    command: /start
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: webservice_production_postgres
    container_name: postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data:Z
      - local_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.local/.postgres
  docs:
    image: webservice_local_docs
    container_name: docs
    build:
      context: .
      dockerfile: ./compose/local/docs/Dockerfile
    env_file:
      - ./.envs/.local/.django
    volumes:
      - ./docs:/docs:z
      - ./config:/app/config:z
      - ./webservice:/app/webservice:z
    ports:
      - "7000:7000"
  mailhog:
    image: mailhog/mailhog:v1.0.0
    container_name: mailhog
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
    container_name: redis
  celeryworker:
    <<: *django
    image: webservice_local_celeryworker
    container_name: celeryworker
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celeryworker
  celerybeat:
    <<: *django
    image: webservice_local_celerybeat
    container_name: celerybeat
    depends_on:
      - redis
      - postgres
      - mailhog
    ports: []
    command: /start-celerybeat
  flower:
    <<: *django
    image: webservice_local_flower
    container_name: flower
    ports:
      - "5555:5555"
    command: /start-flower
  R:
    image: r_local
    container_name: r_local
    build:
      context: .
      dockerfile: ./compose/local/R/Dockerfile
And here is the ./compose/local/R/Dockerfile
FROM r-base
WORKDIR /app
ADD . /app
RUN Rscript -e 'install.packages("dplyr")'
RUN Rscript -e 'install.packages("xgboost")'
RUN Rscript -e 'install.packages("TSrepr")'
RUN Rscript -e 'install.packages("ggplot2")'
RUN Rscript -e 'install.packages("foreach")'
RUN Rscript -e 'install.packages("doParallel")'
But when I run docker-compose -f local.yml up, some errors occur:
r_local | Fatal error: you must specify '--save', '--no-save' or '--vanilla'
r_local exited with code 2
I googled it and found some discussion, but it mentioned Rserve, which I don't think I'm using.
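One likely explanation (an editorial note, not from the original post): the r-base image's default command starts an interactive R session, and R refuses to start non-interactively without one of those flags, so the container exits as soon as compose starts it. If the intent is to keep an R container running alongside the others, giving it an explicit idle command in local.yml would be one way to stop it from exiting immediately, for example:
R:
  image: r_local
  container_name: r_local
  build:
    context: .
    dockerfile: ./compose/local/R/Dockerfile
  # Keep the container alive instead of launching an interactive R session;
  # "tail -f /dev/null" is just a placeholder idle command.
  command: tail -f /dev/null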
So I tried another way: installing R in the django container directly.
sudo docker-compose -f local.yml run --rm django apt-get update
sudo docker-compose -f local.yml run --rm django apt install r-base-core -y
sudo docker-compose -f local.yml run --rm django apt-get install r-cran-quantreg
sudo docker-compose -f local.yml run --rm django apt-get install r-cran-sparsem
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("dplyr")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("xgboost")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("TSrepr")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("ggplot2")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("foreach")'
sudo docker-compose -f local.yml run --rm django Rscript -e 'install.packages("doParallel")'
Then, I got
E: Unable to locate package r-base-core
E: Unable to locate package r-cran-quantreg
E: Unable to locate package r-cran-sparsem
/entrypoint: line 45: exec: Rscript: not found
/entrypoint: line 45: exec: Rscript: not found
...
(many lines are omitted for clarity)
Is there any simple way to call Rscript in the django Docker environment? Many thanks.
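For what it's worth (an editorial sketch, not from the original post): containers created by docker-compose run --rm are thrown away afterwards, so the package lists fetched by apt-get update in the first command are gone by the time the later installs run, which would explain the "Unable to locate package" errors; anything that did install would be discarded anyway. A more durable approach is to bake R into the Django image itself, assuming its base image is Debian-based, roughly like this:
# Hypothetical addition to ./compose/local/django/Dockerfile (Debian-based base image assumed)
RUN apt-get update \
    && apt-get install -y --no-install-recommends r-base r-base-dev \
    && rm -rf /var/lib/apt/lists/*
# Install the R packages the scripts need; the CRAN mirror is an assumption.
RUN Rscript -e 'install.packages(c("dplyr", "xgboost", "TSrepr", "ggplot2", "foreach", "doParallel"), repos = "https://cloud.r-project.org")'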

Related

How to run CLI app on docker for localstack?

I am working on a Python CLI app for AWS SQS (which runs on LocalStack) in Docker. Here's my docker-compose.yml:
version: "3.8"
networks:
localstack-net:
name: localstack-net
driver: bridge
services:
localstack:
image: localstack/localstack
privileged: true
networks:
- localstack-net
ports:
- "4576:4576"
environment:
- DEBUG=1
- EDGE_PORT=4576
- DATA_DIR=/tmp/localstack/data
- SERVICES=sqs:4567
volumes:
- ./.temp/localstack:/tmp/localstack
- ./localstack_setup:/docker-entrypoint-initaws.d/
cli_app:
build:
dockerfile: Dockerfile
container_name: my_app
and here's my dockerfile:
FROM python:3.8-slim
RUN useradd --create-home --shell /bin/bash app_user
WORKDIR /home/app_user
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
USER app_user
COPY . .
CMD ["bash"]
The problem is that the cli_app service exits when I run docker-compose up.
What can I do to rectify this problem?
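A plausible reason (an editorial note, not part of the original question): the image's CMD is bash, and bash exits immediately when it has no terminal attached, so compose reports the service as exited. If the goal is to keep an interactive shell available, one sketch is to allocate a TTY and keep stdin open for that service:
cli_app:
  build:
    dockerfile: Dockerfile
  container_name: my_app
  # Keep bash alive by attaching a pseudo-TTY and stdin; then attach with
  # `docker attach my_app` or run commands via `docker-compose exec cli_app bash`.
  stdin_open: true
  tty: true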

Docker container keeps restarting again and again in docker-compose but not when run in isolation

I'm trying to run a Python program that uses MongoDB, and I want to deploy it on a server, which is why I wrote a docker-compose file. My problem is that when I run the Python project in isolation with the docker build -t PROJECT_NAME . and docker run commands, everything works properly; however, when executing docker-compose up -d, the Python container restarts over and over again. What am I doing wrong?
I just tried to check its logs, but nothing shows up.
Here is the Dockerfile
FROM python:3.7
WORKDIR /app
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
ENV PROD=true
COPY . .
# Installing requirements
RUN pip install -r requirements.txt
RUN export PYTHONPATH=$PATHONPATH:`pwd`
CMD ["python3", "foo.py"]
And the docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      MONGODB_DATABASE: db
      MONGODB_USERNAME: appuser
      MONGODB_PASSWORD: mongopassword
      MONGODB_HOSTNAME: mongodb
    depends_on:
      - mongodb
    networks:
      - internal
  mongodb:
    image: mongo
    container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongodbuser
      MONGO_INITDB_ROOT_PASSWORD: mongodbrootpassword
      MONGO_INITDB_DATABASE: db
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db
    networks:
      - internal
networks:
  internal:
    driver: bridge
volumes:
  mongodbdata:
    driver: local
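As a debugging aid (an editorial sketch, not from the original post), a restart loop usually leaves an exit code and a final log excerpt that these standard commands can surface:
# Follow the app container's output across restarts
docker-compose logs -f app
# Show the exit code and error message of the last run
docker inspect app --format '{{.State.ExitCode}} {{.State.Error}}'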

Gitlab CI/CD deploy Django Docker container

I have been trying to set up a GitLab CI/CD config for a Django project which will be deployed as a container.
This is what I have tried.
CI/CD config:
image: creatiwww/docker-compose:latest
services:
  - docker:dind
stages:
  - lint
  - build
  - deploy
variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
lint:
  stage: lint
  image: python:3.8
  before_script:
    - pip install pipenv
    - pipenv install --dev
  script:
    - pipenv run python -m flake8 --exclude=migrations,settings.py backend
  allow_failure: false
build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
    - echo "IMAGE_APP_TAG=$TAG_LATEST" >> .env
    - docker-compose build
    - docker-compose push
  only:
    - master
deploy-to-prod:
  stage: deploy
  script:
    - eval $(ssh-agent -s)
    - echo "${ID_RSA}" | tr -d '\r' | ssh-add - > /dev/null
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY_IMAGE
    - echo "IMAGE_APP_TAG=$TAG_LATEST" >> .env
    - echo "SECRET_KEY=$SECRET_KEY" >> .env
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" down --remove-orphans
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" pull
    - docker-compose -H "ssh://$SERVER_USER@$SERVER_IP" up -d
  only:
    - master
  when: manual
The pipeline succeeds, but when checking the container's log I get the following output:
python: can't open file 'manage.py': [Errno 2] No such file or directory
Also, the IMAGE column in docker ps is empty.
Please help.
Put this code in your docker-compose.yml:
version: '3.7'
services:
  backend:
    build: ./project_name
    command: sh -c "cd project && python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    depends_on:
      - db
    network_mode: host
  db:
    image: postgres:12.0-alpine
    network_mode: host
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER='db_user'
      - POSTGRES_PASSWORD='db_password'
      - POSTGRES_DB='db_name'
volumes:
  postgres_data:
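One more observation (an editorial sketch, not from either post): the pipeline writes IMAGE_APP_TAG into .env, but docker-compose only builds and pushes a registry-tagged image if the service declares an image: key that references that variable; without it, the build gets a local project-directory name, which would also explain the unhelpful IMAGE column in docker ps. A hypothetical wiring, assuming the variable name from the pipeline above:
services:
  backend:
    build: ./project_name
    # docker-compose substitutes ${IMAGE_APP_TAG} from the .env file written by the CI job,
    # so `docker-compose build` and `docker-compose push` use the registry tag.
    image: "${IMAGE_APP_TAG}"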

Error running migrations in a docker container

I am getting this error when trying to run migrations in my container. I cannot seem to figure out why.
Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"alembic\": executable file not found in $PATH": unknown
Dockerfile:
FROM python:3.8.2
WORKDIR /workspace/
COPY . .
RUN pip install pipenv
RUN pipenv install --deploy --ignore-pipfile
#EXPOSE 8000
#CMD ["pipenv", "run", "python", "/workspace/bin/web.py"]
Docker-Compose:
version: '3'
services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    env_file:
      - .env.database.local
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=pgadmin4@pgadmin.org
      - PGADMIN_DEFAULT_PASSWORD=admin
    ports:
      - "5050:80"
    depends_on:
      - db
  redis:
    image: "redis:alpine"
  web:
    build: .
    environment:
      - PYTHONPATH=/workspace
    env_file:
      - .env.local
    ports:
      - "8000:8000"
    volumes:
      - .:/workspace
    depends_on:
      - db
      - redis
    command: "alembic upgrade head && pipenv run python /workspace/bin/web.py"
The command I run when I encounter this problem:
docker-compose run web alembic revision --autogenerate -m "First migration"
I defined in my Dockerfile that my whole program runs in the /workspace directory, so the command should resolve from there.
Yes, the issue was that I did not add it to my $PATH.
This is what I added inside my docker-compose:
- PATH=/directory/bin:$PATH
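For context (an assumption about where that line lives, since the surrounding file is not shown), it would sit under the web service's environment: list, e.g.:
web:
  environment:
    - PYTHONPATH=/workspace
    # /directory/bin is the placeholder from the answer above; it stands for the
    # directory that actually contains the alembic executable inside the container.
    - PATH=/directory/bin:$PATH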
docker-compose run web pipenv run alembic revision --autogenerate -m "First migration"
or change the Dockerfile to install into the system interpreter (so alembic lands on the default $PATH instead of inside pipenv's virtualenv):
RUN pipenv install --deploy --ignore-pipfile --system
and run
docker-compose run web alembic revision --autogenerate -m "First migration"

Run Python Console via docker-compose on Pycharm

I'm having some problems running PyCharm with a remote Python interpreter via docker-compose. Everything works just great except the Python console: when I press the run button it just shows the following message:
"Error: Unable to locate container name for service "web" from docker-compose output"
I really can't understand why it keeps showing that, since my docker-compose.yml provides a web service.
Any help?
EDIT:
docker-compose.yml
version: '2'
volumes:
  dados:
    driver: local
  media:
    driver: local
  static:
    driver: local
services:
  beat:
    build: Docker/beat
    depends_on:
      - web
      - worker
    restart: always
    volumes:
      - ./src:/app/src
  db:
    build: Docker/postgres
    ports:
      - 5433:5432
    restart: always
    volumes:
      - dados:/var/lib/postgresql/data
  jupyter:
    build: Docker/jupyter
    command: jupyter notebook
    depends_on:
      - web
    ports:
      - 8888:8888
    volumes:
      - ./src:/app/src
  python:
    build:
      context: Docker/python
      args:
        REQUIREMENTS_ENV: 'dev'
    image: helpdesk/python:3.6
  redis:
    image: redis:3.2.6
    ports:
      - 6379:6379
    restart: always
  web:
    build:
      context: .
      dockerfile: Docker/web/Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - python
      - db
    ports:
      - 8001:8000
    restart: always
    volumes:
      - ./src:/app/src
  worker:
    build: Docker/worker
    depends_on:
      - web
      - redis
    restart: always
    volumes:
      - ./src:/app/src
Dockerfile
FROM python:3.6
# Set requirements environment
ARG REQUIREMENTS_ENV
ENV REQUIREMENTS_ENV ${REQUIREMENTS_ENV:-prod}
# Set PYTHONUNBUFFERED so the output is displayed in the Docker log
ENV PYTHONUNBUFFERED=1
# Install apt-transport-https
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    apt-transport-https
# Configure yarn repo
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
# Install APT dependencies
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    locales \
    openssl \
    yarn
# Set locale
RUN locale-gen pt_BR.UTF-8 && \
    localedef -i pt_BR -c -f UTF-8 -A /usr/share/locale/locale.alias pt_BR.UTF-8
ENV LANG pt_BR.UTF-8
ENV LANGUAGE pt_BR.UTF-8
ENV LC_ALL pt_BR.UTF-8
# Copy requirements files to the container
RUN mkdir -p /tmp/requirements
COPY requirements/requirements-common.txt \
    requirements/requirements-$REQUIREMENTS_ENV.txt \
    /tmp/requirements/
# Install requirements
RUN pip install \
    -i http://root:test@pypi.defensoria.to.gov.br:4040/root/pypi/+simple/ \
    --trusted-host pypi.defensoria.to.gov.br \
    -r /tmp/requirements/requirements-$REQUIREMENTS_ENV.txt
# Remove requirements temp folder
RUN rm -rf /tmp/requirements
This is the python image's Dockerfile; the web Dockerfile just builds FROM this image and copies the source folder into the container.
I think this is a dependency chain problem: web depends on python, so while the python container is coming up, the web one does not exist yet. That may cause the error.
Cheers
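One way to test that hypothesis (an editorial sketch, not from the original post) is to bring the dependencies up first and only then let PyCharm, or a manual run, start the web service:
# Start the base image build and database containers first
docker-compose up -d python db
# Then start (or let PyCharm start) the web service once its dependencies exist
docker-compose up -d web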
Installing the required libraries via the command line and running the Python interpreter from the PATH should suffice.
You can also refer to the JetBrains manual on how to configure remote interpreters for their IDEs.
