docker-compose up --build gets stuck while installing a pip package in an Alpine container - python

Installing packages in the Alpine container gets stuck. It hangs at
(6/12) Installing ncurses-terminfo (6.1_p20190105-r0) OR
(10/12) Installing python2 (2.7.16-r1)
Sometimes it works properly.
Command: sudo docker-compose build
I tried configuring a proxy, but it didn't work:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/admin/systemd/
#
# Customize location of Docker binary (especially for development testing).
#DOCKERD="/usr/local/bin/dockerd"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export DOCKER_TMPDIR="/mnt/bigdrive/docker-tmp"
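As the header of that file notes, it does not apply to systemd-based hosts; there the rough equivalent is an /etc/docker/daemon.json (a sketch only, using the same DNS servers and MTU value as above), followed by a daemon restart:

{
  "dns": ["8.8.8.8", "8.8.4.4"],
  "mtu": 1500
}

sudo systemctl restart docker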
Also tried increasing the MTU.
docker-compose.yml
version: '3.7'
services:
  admin-api:
    container_name: admin-api
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - HOME=/home
      - NODE_ENV=dev
      - DB_1=mongodb://mongo:27017/DB_1
      - DB_2=mongodb://mongo:27017/DB_2
    volumes:
      - '.:/app'
      - '/app/node_modules'
      - '$HOME/.aws:/home/.aws'
    ports:
      - '4004:4004'
    networks:
      - backend
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo:4.2.0-bionic
    ports:
      - "27018:27017"
    networks:
      - backend
networks:
  backend:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1500
Dockerfile
# base image
FROM node:8.16.1-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN apk add --update-cache py-pip && \
    pip install awscli && \
    apk --purge -v del py-pip && \
    rm -rf /var/cache/apk/*
RUN npm install --silent
RUN npm install -g nodemon
# start app
CMD nodemon
EXPOSE 4004
My work depends on AWS and requires AWS credentials, so I install the AWS CLI with pip and mount the local $HOME/.aws directory to /home/.aws in the container. But when I create or build the container, it gets stuck and doesn't show any error. While building the container, I also checked the network monitor; it shows 0 bytes/s of packets being received.
Running the build with --verbose didn't give any useful information either.
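One way to narrow this down is to check whether the base image can reach the Alpine package mirror at all. A rough diagnostic sketch (the mirror hostname and DNS address below are just the public defaults, not anything taken from this setup):

# Can the container resolve and reach the Alpine CDN?
docker run --rm node:8.16.1-alpine nslookup dl-cdn.alpinelinux.org
docker run --rm node:8.16.1-alpine wget -q -O /dev/null http://dl-cdn.alpinelinux.org/alpine/ && echo "mirror reachable"

# Same check, but forcing the DNS server configured in DOCKER_OPTS above
docker run --rm --dns 8.8.8.8 node:8.16.1-alpine nslookup dl-cdn.alpinelinux.org

If the nslookup or wget hangs here too, the problem is in the daemon's network/DNS configuration rather than in the Dockerfile or docker-compose.yml.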

Related

How to setup psycopg2 in a docker container running on a droplet?

I'm trying to wrap a scraping project in a Docker container to run it on a droplet. The spider scrapes a website and then writes the data to a Postgres database. The Postgres database is already running and managed by DigitalOcean.
When I run the command locally to test, everything is fine:
docker compose up
I can see the spider writing to the database.
Then I use a GitHub Actions workflow to build and push my Docker image to a registry each time I push the code:
name: CI
# 1
# Controls when the workflow will run.
on:
  # Triggers the workflow on push events but only for the master branch
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      version:
        description: 'Image version'
        required: true
#2
env:
  REGISTRY: "registry.digitalocean.com/*****-registery"
  IMAGE_NAME: "******-scraper"
  POSTGRES_USERNAME: ${{ secrets.POSTGRES_USERNAME }}
  POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }}
  POSTGRES_HOSTNAME: ${{ secrets.POSTGRES_HOSTNAME }}
  POSTGRES_PORT: ${{ secrets.POSTGRES_PORT }}
  POSTGRES_DATABASE: ${{ secrets.POSTGRES_DATABASE }}
  SPLASH_URL: ${{ secrets.SPLASH_URL }}
#3
jobs:
  build-compose:
    name: Build docker-compose
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      - name: Login to DO Container Registry with short-lived creds
        run: doctl registry login --expiry-seconds 1200
      - name: Remove all old images
        run: if [ ! -z "$(doctl registry repository list | grep "****-scraper")" ]; then doctl registry repository delete-manifest ****-scraper $(doctl registry repository list-tags ****-scraper | grep -o "sha.*") --force; else echo "No repository"; fi
      - name: Build compose
        run: docker compose -f docker-compose.yaml up -d
      - name: Push to Digital Ocean registry
        run: docker compose push
  deploy:
    name: Deploy from registry to droplet
    runs-on: ubuntu-latest
    needs: build-compose
Then I ssh root@ipv4 manually into my droplet in order to install Docker and Docker Compose and run the image from the registry with:
# Login to registry
docker login -u DO_TOKEN -p DO_TOKEN registry.digitalocean.com
# Stop running container
docker stop ****-scraper
# Remove old container
docker rm ****-scraper
# Run a new container from a new image
docker run -d --restart always --name ****-scraper registry.digitalocean.com/****-registery/****-scraper
As soon as the Python script starts on the droplet I get the error:
psycopg2.OperationalError: could not connect to server: No such file
or directory Is the server running locally and accepting connections
on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
It seems like I'm doing something wrong and I can't figure out how to fix this so far.
I would appreciate some help or explanations.
Thanks,
My Dockerfile:
# As Scrapy runs on Python, I run the official Python 3 Docker image.
FROM python:3.9.7-slim
# Set the working directory to /usr/src/app.
WORKDIR /usr/src/app
# Install libpq-dev for psycopg2 python package
RUN apt-get update \
    && apt-get -y install libpq-dev gcc
# Copy the file from the local host to the filesystem of the container at the working directory.
COPY requirements.txt ./
# Install Scrapy specified in requirements.txt.
RUN pip3 install --no-cache-dir -r requirements.txt
# Copy the project source code from the local host to the filesystem of the container at the working directory.
COPY . .
# For Splash
EXPOSE 8050
# Run the crawler when the container launches.
CMD [ "python3", "./****/launch_spiders.py" ]
My docker-compose.yaml
version: "3"
services:
splash:
image: scrapinghub/splash
restart: always
command: --maxrss 2048 --max-timeout 3600 --disable-lua-sandbox --verbosity 1
ports:
- "8050:8050"
launch_spiders:
restart: always
build: .
volumes:
- .:/usr/src/app
image: registry.digitalocean.com/****-registery/****-scraper
depends_on:
- splash
Try installing the binary package psycopg2-binary instead of psycopg2; then you don't need gcc or libpq-dev. You probably have mixed versions of PostgreSQL.
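For example, a minimal sketch of that change (the version pin below is only an illustration, not taken from the question):

# requirements.txt
# psycopg2==2.9.3          <- drop the source package
psycopg2-binary==2.9.3     # ships a bundled libpq, so gcc/libpq-dev are no longer needed

# Dockerfile: the compiler layer can then be removed
# RUN apt-get update \
#     && apt-get -y install libpq-dev gcc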
Problem solved!
The .env file with all my credentials was listed in .dockerignore, so it was excluded from the build context and could not be found when building the image.
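If the .env should nevertheless stay out of the image (so the credentials are not baked into it), an alternative is to keep it in .dockerignore and inject the variables at runtime instead. A sketch, reusing the service and image names from the compose file above:

# docker-compose.yaml
launch_spiders:
  env_file:
    - .env

# or, when running the image directly on the droplet
docker run -d --restart always --env-file .env --name ****-scraper registry.digitalocean.com/****-registery/****-scraper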

Error Trying to run docker-compose with flask application and Mysql database

I'm getting this error when running docker-compose up and I don't know why. I tried researching it, but none of the solutions I found worked. If anyone knows, it would be awesome if you could share it. Thanks!
ERROR
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysql' ([Errno -2] Name or service not known)")
This is my docker-compose.yml file. It has the 2 images that it needs to build.
docker-compose.yml
version: "3.7"
services:
web:
build: .
depends_on:
- mysql
ports:
- 5000:5000
links:
- mysql
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: 12345678
MYSQL_DB: flaskmysql
mysql:
image: mysql:5.7
ports:
- "32000:3306"
volumes:
- ./mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 12345678
MYSQL_DATABASE: flaskmysql
volumes:
mysql-data:
This is my Dockerfile that has all the steps to run my application.
Dockerfile
FROM python:3.9-slim-buster
RUN apt-get update && apt-get install -y git python3-dev gcc gfortran libopenblas-dev liblapack-dev \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --upgrade -r requirements.txt
COPY app app/
RUN python app/server.py
EXPOSE 5000
CMD ["python", "app/server.py", "serve"]
Here I've got the lines of code that tries to make a connection to the service that docker-compose created with the image given.
Server.py
from flask import Flask

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:12345678@mysql:3306/flaskmysql'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS']= False
I believe you are running into a race condition here. Even though you've specified the depends_on dependency for your application, note that docker-compose will not 'wait' for the database to be available before proceeding to the next step. That is because docker-compose doesn't "know" what it means for this service to become "ready".
This means that as long as the container is 'Running' (it may still be initialising the database), docker-compose will move on to building and running your application, which then attempts to connect to a database that is not ready yet. You can do two things here:
Add a waiting loop in your application to attempt retries. RECOMMENDED.
Add a solution like wait-for-it to your docker compose setup.
You can find more details in the Docker documentation page on controlling startup order.
I suggest adding a simple retry loop in your application :)
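For instance, a minimal retry loop (a sketch only; it assumes SQLAlchemy/pymysql as in the question and simply retries the initial connection before giving up):

import time
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError

DB_URI = 'mysql+pymysql://root:12345678@mysql:3306/flaskmysql'

def wait_for_db(uri, retries=10, delay=3):
    """Keep trying to open a connection until MySQL is ready or retries run out."""
    engine = create_engine(uri)
    for attempt in range(1, retries + 1):
        try:
            with engine.connect():
                return engine  # connection succeeded, the database is up
        except OperationalError:
            print(f"Database not ready (attempt {attempt}/{retries}), retrying in {delay}s...")
            time.sleep(delay)
    raise RuntimeError("Database never became available")

engine = wait_for_db(DB_URI)

Calling something like this before the Flask app issues its first query keeps the web container alive until MySQL has finished initialising.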

Running Django's collectstatic in Dockerfile produces empty directory

I'm trying to run Django from a Docker container on Heroku, but to make that work, I need to run python manage.py collectstatic during my build phase. To achieve that, I wrote the following Dockerfile:
# Set up image
FROM python:3.10
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install poetry and identify Python dependencies
RUN pip install poetry
COPY pyproject.toml /usr/src/app/
# Install Python dependencies
RUN set -x \
    && apt update -y \
    && apt install -y \
        libpq-dev \
        gcc \
    && poetry config virtualenvs.create false \
    && poetry install --no-ansi
# Copy source into image
COPY . /usr/src/app/
# Collect static files
RUN python -m manage collectstatic -v 3 --no-input
And here's the docker-compose.yml file I used to run the image:
services:
  db:
    image: postgres
    env_file:
      - .env.docker.db
    volumes:
      - db:/var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5433:5432"
  web:
    build: .
    restart: always
    env_file:
      - .env.docker.web
    ports:
      - "8001:$PORT"
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
    networks:
      - backend
    command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi
volumes:
  db:
networks:
  backend:
    driver: bridge
The Dockerfile builds just fine, and I can even see that collectstatic is running and collecting the appropriate files during the build. However, when the build is finished, the only evidence that collectstatic ran is an empty directory called staticfiles. If I run collectstatic again inside of my container, collectstatic works just fine, but since Heroku doesn't persist files created after the build stage, they disappear when my app restarts.
I found a few SO answers discussing how to get collectstatic to run inside a Dockerfile, but that's not my problem; my problem is that it does run, but the collected files don't show up in the container. Anyone have a clue what's going on?
UPDATE: This answer did the trick. My docker-compose.yml was overriding the changes made by collectstatic with this line:
volumes:
  - .:/usr/src/app
If, like me, you want to keep the bind mount for ease of local development (so that you don't need to re-build each time), you can edit the command for the web service as follows:
command: bash -c "python -m manage collectstatic && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi"
Note that the image would have run just fine as-is had I pushed it to Heroku (since Heroku doesn't use the docker-compose.yml file), so this was just a problem affecting containers I created on my local machine.
You are overriding the content of /usr/src/app in your container by adding
volumes:
  - .:/usr/src/app
to your docker compose file.
Remove it, since you already copied everything during the build.
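In other words, a sketch of the web service with the bind mount removed (all other keys as in the compose file above):

web:
  build: .
  restart: always
  env_file:
    - .env.docker.web
  ports:
    - "8001:$PORT"
  # no '.:/usr/src/app' bind mount, so the files collected during the build stay visible
  depends_on:
    - db
  networks:
    - backend
  command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi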

Compose up container exited with code 0 and logs it with empty

I need to containerize a Django web project with Docker. I divided the project into a dashboard, an api-server and a database. When I type docker-compose up, it prints api-server exited with code 0 and the api-server container shows Exited (0); when I type docker logs api-server, it returns nothing, while the other containers behave normally. I don't know how to track down the problem.
api-server directory structure is as follows
api-server
    server/
    Dockerfile
    requirements.txt
    start.sh
    ...
...
Some compose yml content is as follows
dashboard:
  image: nginx:latest
  container_name: nginx-dashboard
  volumes:
    - /nginx/nginx/default:/etc/nginx/conf.d/default.conf:ro
    - /nginx/dist:/var/www/html:ro
  ports:
    - "80:80"
  depends_on:
    - api-server
api-server:
  build: /api-server
  container_name: api-server
  volumes:
    - /api-server:/webapps
  ports:
    - "8000:8000"
  depends_on:
    - db
db:
  image: postgres
  container_name: Postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres
  ports:
    - "5432:5432"
Some Dockerfile content of api-server is as follows
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
RUN apt-get clean && apt-get update && apt-get upgrade -y && apt-get install -y python3-pip libpq-dev apt-utils
COPY ./requirements.txt /webapps/
RUN pip3 install -r /webapps/requirements.txt
COPY . /webapps/
CMD ["bash","-c","./start.sh"]
start.sh is as follows
#!/usr/bin/env bash
cd server/
python manage.py runserver 0.0.0.0:8000
type docker-compose up result as follows
root@VM:/home/test/Documents/ComposeTest# docker-compose up
Creating network "composetest_default" with the default driver
Creating Postgres ... done
Creating api-server ... done
Creating dashboard ... done
Attaching to Postgres, api-server, dashboard
Postgres | The files belonging to this database system will be owned by user "postgres".
Postgres | This user must also own the server process.
...
...
api-server exited with code 0
api-server exited with code 0
docker logs api-server is empty
I would really appreciate it if you could tell me how to debug this problem; providing a solution would be even better.
You are already copying the api-server code into the image at build time, which should work fine, but the volume in your docker compose file overrides it, hiding the installed pip packages and the code:
volumes:
  - /api-server:/webapps
Remove the volume from your docker compose file and it should work.
Second, set execute permission on the bash script:
COPY . /webapps/
RUN chmod +x ./start.sh
Third, you don't need to run the server through bash, since there is nothing in that script that CMD cannot do itself, so why not use it as the CMD directly?
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Auto reloading Django server on Docker

I am learning to use Docker and I have been having a problem since yesterday (before resorting to asking, I tried to investigate it myself but could not solve it). I have a Django project on my local machine and the same project running with Docker, but when I change my local project, the change is not reflected in the container where the project is running. I would be very grateful if you could help me with this. Thank you.
Dockerfile
FROM python:3.7-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /code
RUN pipenv install --skip-lock --system --dev
COPY ./entrypoint.sh /code
COPY . /code
ENTRYPOINT [ "/code/entrypoint.sh" ]
docker-compose.yml
# docker-compose version we will work with
version: '3'
# defining the services that will run in our containers
services:
  web:
    restart: always
    build: .
    command: gunicorn app.wsgi:application --bind 0.0.0.0:8000 #python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - static_volume:/code/staticfiles
      - media_volume:/code/mediafiles
    expose:
      - 8000
    environment:
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=postgres
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
    env_file: .env
  db:
    restart: always
    image: postgres:10.5-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
  nginx:
    restart: always
    build: ./nginx
    volumes:
      - static_volume:/code/staticfiles
      - media_volume:/code/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  static_volume:
  media_volume:
And a small doubt here: is it good practice to store the environment variables in the Dockerfile or in docker-compose? I use a .env file, but I have seen in many places that people store the variables in docker-compose, as shown in the code above.
I hope you can help me. Any recommendation about my project is very welcome; as I said, I'm new to Docker, but I really like it and I would like to learn more about it.
How people usually approach this is to have separate docker-compose configurations for the development and production environments, e.g. local.yml and production.yml. That way you can use runserver while developing (which you'll probably find more suitable, since you get a lot of debug information) and gunicorn in production.
I'd recommend looking into the https://github.com/pydanny/cookiecutter-django project, which has a lot of Django good practices built in as well as a good out-of-the-box Docker configuration. You can create a test project using the cookiecutter and then inspect how they do the Docker setup, including environment variables.
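As an illustration, a development-only override could run runserver against the bind-mounted code so Django's autoreloader picks up local changes. A sketch (reusing the service names from the compose file above; the file name local.yml is just a convention, started with docker-compose -f local.yml up):

# local.yml - development overrides
version: '3'
services:
  web:
    build: .
    # runserver watches the mounted source and reloads on every change
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file: .env
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - ./postgres-data:/var/lib/postgresql/data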
