Error trying to run docker-compose with Flask application and MySQL database - python

I'm getting this error when running docker-compose up and I don't know why. I tried researching it, but none of the solutions I found worked. If anyone knows, it would be awesome if you could share it. Thanks!
ERROR
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysql' ([Errno -2] Name or service not known)")
This is my docker-compose.yml file. It defines the two services that need to be brought up.
docker-compose.yml
version: "3.7"
services:
web:
build: .
depends_on:
- mysql
ports:
- 5000:5000
links:
- mysql
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: 12345678
MYSQL_DB: flaskmysql
mysql:
image: mysql:5.7
ports:
- "32000:3306"
volumes:
- ./mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 12345678
MYSQL_DATABASE: flaskmysql
volumes:
mysql-data:
This is my Dockerfile that has all the steps to run my application.
Dockerfile
FROM python:3.9-slim-buster
RUN apt-get update && apt-get install -y git python3-dev gcc gfortran libopenblas-dev liblapack-dev \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --upgrade -r requirements.txt
COPY app app/
RUN python app/server.py
EXPOSE 5000
CMD ["python", "app/server.py", "serve"]
Here are the lines of code that try to connect to the service that docker-compose created from the given image.
server.py
from flask import Flask

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:12345678@mysql:3306/flaskmysql'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False

I believe you are running into a race condition here. Even though you've specified the depends_on dependency for your application, note that docker-compose will not wait for the database to be available before proceeding to the next step; docker-compose doesn't "know" what it means for this service to become "ready".
This means that as long as the mysql container is running (it may still be initialising the database), docker-compose will move on to building the image for your application and running it, and the application then attempts to connect to a database that isn't ready yet. You can do two things here:
Add a waiting loop in your application to attempt retries (RECOMMENDED; see the sketch below).
Add a solution like wait-for-it to your docker-compose setup.
You can find more details on the startup-order page of the Docker documentation.
I suggest adding a simple retry loop in your application :)
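A minimal sketch of such a retry loop, assuming the app connects with SQLAlchemy as in the question; the helper name wait_for_db and the retry/delay values are illustrative, not from the original post:
import time

from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError

def wait_for_db(uri, retries=10, delay=3):
    """Retry the initial connection until MySQL accepts it, then return the engine."""
    engine = create_engine(uri)
    for attempt in range(1, retries + 1):
        try:
            with engine.connect():
                return engine  # connection succeeded; the database is up
        except OperationalError:
            print(f"Database not ready (attempt {attempt}/{retries}); retrying in {delay}s")
            time.sleep(delay)
    raise RuntimeError("Database never became available")

engine = wait_for_db('mysql+pymysql://root:12345678@mysql:3306/flaskmysql')
If you prefer the wait-for-it route instead, its documented usage is to wrap your start command, e.g. wait-for-it.sh mysql:3306 -- python app/server.py serve.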

Related

Dockerized Django app and MySQL with docker-compose using .env

I would like to run my Django project in a Docker container with its database in another Docker container, inside a Debian host.
When I run my Docker container, I get errors like: Lost connection to MySQL server during query ([Errno 104] Connection reset by peer).
The command mysql > SET GLOBAL log_bin_trust_function_creators = 1 is very important because the database's Django user creates triggers.
Moreover, I use a single .env file, also used to create the DB image, to store the DB user and password. Its path is settings/.env.
My code:
docker-compose.yml:
version: '3.3'
services:
  db:
    image: mysql:8.0.29
    container_name: db_mysql_container
    environment:
      MYSQL_DATABASE: $DB_NAME
      MYSQL_USER: $DB_USER
      MYSQL_PASSWORD: $DB_PASSWORD
      MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
    command: ["--log_bin_trust_function_creators=1"]
    ports:
      - '3306:3306'
    expose:
      - '3306'
  api:
    build: .
    container_name: django_container
    command: bash -c "pip install -q -r requirements.txt &&
      python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - '8000:8000'
    depends_on:
      - db
Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.9.14-buster
ENV PYTHONUNBUFFERED=1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
How do I start my Django project? Is it possible to start only the DB container?
What commands do I need to execute, and what changes do I need to make? I'm a novice with Docker, so if you help me, please explain your commands and actions!
You can find this project on my GitHub.
Thanks!
To run the dockerized Django project, you can simply run the command below:
docker-compose run projectname bash -c "python manage.py createsuperuser"
The command above is used to create a superuser.
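For the other part of the question, these are standard docker-compose commands rather than anything specific to this project: you can start only the database container by naming its service, or build and start everything at once:
# start just the DB service (named db in the compose file above)
docker-compose up -d db
# build and start everything (db plus Django)
docker-compose up --build
Also worth checking: docker-compose only substitutes variables like $DB_NAME from the shell environment or from a .env file sitting next to docker-compose.yml, not from settings/.env, so you may need to copy or symlink it there.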

Running Django's collectstatic in Dockerfile produces empty directory

I'm trying to run Django from a Docker container on Heroku, but to make that work, I need to run python manage.py collectstatic during my build phase. To achieve that, I wrote the following Dockerfile:
# Set up image
FROM python:3.10
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install poetry and identify Python dependencies
RUN pip install poetry
COPY pyproject.toml /usr/src/app/
# Install Python dependencies
RUN set -x \
    && apt update -y \
    && apt install -y \
        libpq-dev \
        gcc \
    && poetry config virtualenvs.create false \
    && poetry install --no-ansi
# Copy source into image
COPY . /usr/src/app/
# Collect static files
RUN python -m manage collectstatic -v 3 --no-input
And here's the docker-compose.yml file I used to run the image:
services:
  db:
    image: postgres
    env_file:
      - .env.docker.db
    volumes:
      - db:/var/lib/postgresql/data
    networks:
      - backend
    ports:
      - "5433:5432"
  web:
    build: .
    restart: always
    env_file:
      - .env.docker.web
    ports:
      - "8001:$PORT"
    volumes:
      - .:/usr/src/app
    depends_on:
      - db
    networks:
      - backend
    command: gunicorn --bind 0.0.0.0:$PORT myapp.wsgi
volumes:
  db:
networks:
  backend:
    driver: bridge
The Dockerfile builds just fine, and I can even see that collectstatic is running and collecting the appropriate files during the build. However, when the build is finished, the only evidence that collectstatic ran is an empty directory called staticfiles. If I run collectstatic again inside of my container, collectstatic works just fine, but since Heroku doesn't persist files created after the build stage, they disappear when my app restarts.
I found a few SO answers discussing how to get collectstatic to run inside a Dockerfile, but that's not my problem; my problem is that it does run, but the collected files don't show up in the container. Anyone have a clue what's going on?
UPDATE: This answer did the trick. My docker-compose.yml was overriding the changes made by collectstatic with this volume mount:
volumes:
  - .:/usr/src/app
If, like me, you want to keep the bind mount for ease of local development (so that you don't need to re-build each time), you can edit the command for the web service as follows:
command: bash -c "python -m manage collectstatic && gunicorn --bind 0.0.0.0:$PORT myapp.wsgi"
Note that the image would have run just fine as-is had I pushed it to Heroku (since Heroku doesn't use the docker-compose.yml file), so this was just a problem affecting containers I created on my local machine.
You are overriding the content of /usr/src/app in your container when you add
volumes:
  - .:/usr/src/app
to your docker-compose file.
Remove it, since you already copied everything into the image during the build.
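If you'd rather keep the bind mount for local development, a common alternative (a sketch, not from the original answer; it assumes STATIC_ROOT is /usr/src/app/staticfiles) is to shadow just the collected directory with an anonymous volume, so the image's build-time output stays visible:
volumes:
  - .:/usr/src/app
  - /usr/src/app/staticfiles   # anonymous volume: not overridden by the bind mount
The volume is seeded from the image's content when it is first created, the same trick the node_modules example further down this page uses.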

Postgres in Docker/ Django app does not work - OperationalError: could not connect to server: Connection refused

I have a Django app that runs locally with no problems.
Now, I want to get a docker image of my app but when I try to build it, it gives me the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
I am new to Django and Docker development, and I looked at similar questions but no answer solved my problem.
I'll show you my Dockerfile; I copied the Postgres commands from another project that does work:
FROM python:3.8
RUN apt-get update
RUN mkdir /project
WORKDIR /project
RUN apt-get install -y vim
COPY requirements.txt /project/
RUN pip install -r requirements.txt
COPY . /project/
# Install postgresql
RUN apt install -y postgresql postgresql-contrib
RUN service postgresql start
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Create a PostgreSQL role
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER admin WITH SUPERUSER PASSWORD 'passwd';" &&\
createdb -O admin plataforma
USER root
# setup postgresql
RUN sed -i "/^#listen_addresses/i listen_addresses='*'" /etc/postgresql/13/main/postgresql.conf
RUN sed -i "/^# DO NOT DISABLE\!/i # Allow access from any IP address" /etc/postgresql/13/main/pg_hba.conf
RUN sed -i "/^# DO NOT DISABLE\!/i host all all 0.0.0.0/0 md5\n\n\n" /etc/postgresql/13/main/pg_hba.conf
# running commands of my app
RUN python manage.py makemigrations accounts
RUN python manage.py sqlmigrate accounts 0001
RUN python manage.py migrate
RUN python manage.py inicializar
# Expose some ports
EXPOSE 22 5432 8080 8009 8000
# volumes
VOLUME ["/var/lib/postgresql/12/main"]
# Default command
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Here is my requirements.txt file:
#requirements.txt
Django==3.2.8
djangorestframework==3.9.1
gunicorn==19.9.0
pandas==1.3.3
path==16.2.0
six==1.14.0
Pillow==7.0.0
psycopg2>=2.8
django-environ==0.8.1
Here is my settings.py file (the databases fragment):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': env('POSTGRESQL_NAME'),
        'USER': env('POSTGRESQL_USER'),
        'PASSWORD': env('POSTGRESQL_PASS'),
        'HOST': env('POSTGRESQL_HOST'),
        'PORT': env('POSTGRESQL_PORT'),
    }
}
And here is my .env file:
POSTGRESQL_NAME=plataforma
POSTGRESQL_USER=admin
POSTGRESQL_PASS=passwd
POSTGRESQL_HOST=localhost
POSTGRESQL_PORT=5432
DEBUG=True
Some people told me that I could make a docker-compose file so I erased the postgres install commands from Dockerfile and made this:
version: "3.3"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
ports:
- "5432:5432"
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
environment:
- POSTGRES_NAME=plataforma
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=passwd
depends_on:
- db
But this doesn't work either.
Footnote: if I use Django's default database (SQLite) and (obviously) erase the Postgres commands from the Dockerfile, I can build the Docker image without problems, and again, this app works fine with Postgres when I run it locally. So something is happening with Docker + Postgres and I don't know what to do.
Can somebody help me? Thank you!
Edit: I erased the migration commands from the Dockerfile and changed the POSTGRESQL_HOST environment variable to db. Now when I run $ sudo docker-compose run web python manage.py runserver . the image is created but the container is not, and when I try to run a container with that image I get the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Sorry for my English, I'm still learning it too.
EDIT AGAIN: I finally solved this issue in this question.
Thanks for the help!
While creating a superuser, your command is
CREATE USER admin WITH SUPERUSER PASSWORD 'administrador'
but in your env file you are using passwd as the password.
Change your env from
POSTGRESQL_PASS=passwd
to this:
POSTGRESQL_PASS=administrador
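Not part of the original answer, but worth noting against the compose file above: the official postgres image reads POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB from the db service's environment, not the web service's, so a sketch of that fix (values taken from the question) would be:
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=plataforma
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=passwd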

View Docker Swarm CMD Line Output

I am trying to incorporate a Python container and a DynamoDB container into one stack file to experiment with Docker swarm. I have done Docker swarm tutorials before, watching web apps run across multiple nodes, but I have never built anything independently. I am able to run docker-compose up with no issues, but I am struggling with swarm.
My docker-compose.yml looks like:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    links:
      - "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code is not displayed in the terminal. I get the following command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My questions are:
1) Why is the deploy ignoring the links? I have noticed this is mentioned in the docs https://docs.docker.com/engine/reference/commandline/stack_deploy/ but I am unsure whether it will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command-line output be shown, to confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services whose stdout you want to see on the console. (As for the ignored links: swarm mode does not support them, but services deployed in the same stack share an overlay network and already resolve each other by service name, so your app should connect to the dynamodb host name rather than the localhost alias.)
The updated docker-compose.yaml should look like this:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    links:
      - "dynamodb:localhost"
and then, once you have the stack deployed, you can check the service logs by running:
# get the service name
docker stack services <STACK_NAME>
# display the service logs
docker service logs --follow --raw <SERVICE_NAME>
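To make the service-name resolution concrete, here is a sketch of how the Python app could talk to the dynamodb service directly; it assumes boto3, which the question never shows, so treat the names and parameters as illustrative:
import boto3

# Connect to dynamodb-local by its stack DNS name instead of a link alias.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://dynamodb:8000",  # service name resolves on the stack's overlay network
    region_name="us-east-1",              # dynamodb-local accepts any region
    aws_access_key_id="dummy",            # dynamodb-local does not validate credentials
    aws_secret_access_key="dummy",
)
print(list(dynamodb.tables.all()))  # prints [] on a fresh local instance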

docker-compose up --build, get stuck while installing the pip package in alpine container

Installing packages in Alpine gets stuck.
It gets stuck at
(6/12) Installing ncurses-terminfo (6.1_p20190105-r0) OR
(10/12) Installing python2 (2.7.16-r1)
Sometimes it works properly.
Command: sudo docker-compose build
I tried a proxy but it didn't work:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/admin/systemd/
#
# Customize location of Docker binary (especially for development testing).
#DOCKERD="/usr/local/bin/dockerd"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export DOCKER_TMPDIR="/mnt/bigdrive/docker-tmp"
I also tried increasing the MTU:
docker-compose.yml
version: '3.7'
services:
  admin-api:
    container_name: admin-api
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - HOME=/home
      - NODE_ENV=dev
      - DB_1=mongodb://mongo:27017/DB_1
      - DB_2=mongodb://mongo:27017/DB_2
    volumes:
      - '.:/app'
      - '/app/node_modules'
      - '$HOME/.aws:/home/.aws'
    ports:
      - '4004:4004'
    networks:
      - backend
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo:4.2.0-bionic
    ports:
      - "27018:27017"
    networks:
      - backend
networks:
  backend:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1500
Dockerfile
# base image
FROM node:8.16.1-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN apk add --update-cache py-pip && \
    pip install awscli && \
    apk --purge -v del py-pip && \
    rm -rf /var/cache/apk/*
RUN npm install --silent
RUN npm install -g nodemon
# start app
CMD nodemon
EXPOSE 4004
My work depends on AWS and requires AWS credentials. I installed the AWS CLI with pip and mounted my local $HOME/.aws into /home/.aws in the container, but when I create or build the container it gets stuck and doesn't show any error. While building the container, I also checked the network monitor; it shows 0 bytes/s of packets received.
I tried --verbose but it didn't give any useful information.
