Docker Compose with Django and Postgres Fails with "django.db.utils.OperationalError: could not connect to server:" [duplicate] - python

This question already has an answer here:
Docker-compose: App container can't connect to Postgres
(1 answer)
Closed yesterday.
I am trying to run my django/postgres application with docker compose. When I run docker compose up -d I get the following logs on my postgres container running on port 5432:
2023-02-18 00:10:25.049 UTC [1] LOG: starting PostgreSQL 13.8 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-02-18 00:10:25.052 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-02-18 00:10:25.054 UTC [22] LOG: database system was shut down at 2023-02-18 00:10:06 UTC
2023-02-18 00:10:25.056 UTC [1] LOG: database system is ready to accept connections
It appears my postgres container is working properly. However, my python/django container has the following logs:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - ./xi/.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=***
      - POSTGRES_USER=***
      - POSTGRES_PASSWORD=***
    volumes:
      - dev-db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  dev-db-data:
Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install nano python3-dev libpq-dev -y
COPY requirements-prod.txt /code/
RUN pip install -r requirements-prod.txt
COPY . /code/
I must be missing something small that allows the python container to communicate with the postgres container.
Also a few additional questions:
What does it mean that the container is "listening on IPv4 address '0.0.0.0', port 5432"? To my understanding, 0.0.0.0 encompasses all IP addresses, including 127.0.0.1, so in this case that shouldn't be an issue (correct me if I'm wrong).
I have been struggling with this for a few days. I have followed the getting-started docs as well as the Python usage guides in the Docker docs, and I feel that I understand everything, yet I am unable to debug my containers efficiently. What additional knowledge can I acquire that would help me debug a container with the same level of comfort as I would a python script?
I tried a few things:
swapping env_file with the credentials hard coded in
changing python with python3
removing sh -c
I tried building my database first with docker-compose up -d --build db and then building my web app with docker-compose up -d --build web, and the issue persisted.
I tried everything with the environment variables, and it appears improper credentials are not the issue. Running python manage.py runserver without docker, it successfully connects to the database. There are some similar Stack Overflow questions, but I have tried their solutions and they do not work.
Part of my issue is that I don't know what to try, or how to efficiently debug docker containers yet (hence the question above).
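One quick way to check connectivity from inside the web container is a small socket probe. This is a sketch assuming the compose file above; the filename is illustrative, and you could run it with something like docker compose exec web python /code/check_db.py:

# check_db.py: a minimal reachability sketch, assuming the compose file above
import socket

for host in ("db", "127.0.0.1"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect((host, 5432))
        print(f"{host}:5432 reachable")      # "db" should succeed from the web container
    except OSError as exc:
        print(f"{host}:5432 failed: {exc}")  # "127.0.0.1" fails: postgres is not in this container
    finally:
        s.close()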

What have you set as your HOST variable in DATABASES['default'] in settings.py? If it's '127.0.0.1', try changing it to 'db' to match the Compose service name.
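For reference, a minimal sketch of that change in settings.py, assuming the compose file above (the environment variable names here are illustrative; match them to what your .env actually defines):

# settings.py: a sketch assuming the compose file above.
# Inside the compose network, the hostname of the database is the
# service name "db"; 127.0.0.1 refers to the web container itself.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        "HOST": os.environ.get("POSTGRES_HOST", "db"),  # "db", not "127.0.0.1"
        "PORT": "5432",
    }
}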

Related

Can't connect to my Docker Postgres from python suddenly

I've been trying to configure my M1 to work with an older Ruby on Rails API, and I think in the process I've broken the ability of my python APIs to connect to their database images running locally in docker.
When I run:
psql -U dev -h localhost database
Instead of the lovely psql blinking cursor allowing me to run any sql statement I'd like, I get this error message instead:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: database "dev" does not exist
I've tried docker-compose up and down and force recreating, and brew uninstalling and reinstalling postgres. I've downloaded the postgres.app dmg and made sure to change it to a different port, hoping that would trigger whatever psycopg2 needs to connect to the docker image.
the docker-compose.yaml looks like this:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      POSTGRES_USER: dev
      POSTGRES_HOST_AUTH_METHOD: trust
    networks:
      default:
        aliases:
          - postgres
    ports:
      - 5432:5432
What am I missing, and what can I blame Ruby on Rails for (which works, by the way)? 🤣
I think it's just docker configuration you need to update.
First of all, check whether the port is already used by any other service on your local machine (most likely a local postgres server).
The next step is to change your yaml file as below:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=test_db
    ports:
      - 5434:5432
After that, you can connect with the following command from your terminal:
psql -U postgres -h localhost -p 5434
This assumes you have a separate yaml file for your python application.
If you merge your python code into the same yaml file, then the host in your connection string will be the service name (db in your case) and the port will be 5432.
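To illustrate that distinction from Python, here is a hedged psycopg2 sketch (credentials taken from the yaml above):

# a psycopg2 sketch of the two connection paths (assumes the yaml above)
import psycopg2

# from the host machine: use the published port 5434
conn = psycopg2.connect(
    host="localhost", port=5434,
    dbname="test_db", user="postgres", password="postgres",
)
print("connected:", conn.status)
conn.close()

# from another container on the same compose network, it would instead be:
# psycopg2.connect(host="db", port=5432, dbname="test_db",
#                  user="postgres", password="postgres")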
So the answer is pretty simple. What was happening was that I had a third instance of postgres running on my computer that I had not accounted for: the brew version. Simply running brew services stop postgres and later brew uninstall postgres fixed all my problems. My Ruby on Rails API, which relies on "postgres native" on my mac, works (protip: I changed this one to use port 5431), and my python APIs, which use a containerized postgres on port 5432, work without any headaches. During some initial confusion during my Ruby on Rails setup, which required getting Ruby 2.6.7 running on an M1 mac, I must have installed postgres via brew in an attempt to get something like db:create to work.
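If you suspect the same kind of shadowing, one hedged way to check from Python which instance actually answered (user and database names here are illustrative, matching the trust-auth compose file above):

# which Postgres actually answered on 5432? (a sketch; names illustrative)
import psycopg2

conn = psycopg2.connect(host="localhost", port=5432, user="dev", dbname="postgres")
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    # e.g. "... aarch64-unknown-linux-musl ..." points at the Alpine container,
    # while a brew build typically mentions Apple clang
    print(cur.fetchone()[0])
conn.close()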

Python WebSocket connection refused in Docker

I have a python socketio application that works just fine when run locally. However, when I move it into Docker, external clients are unable to connect and throw this error:
socketio.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
The WebSocket server is really basic and publishes to 0.0.0.0:8080. My client apps connect to localhost:8080. In my Docker container, I've exposed port 8080. I'm guessing that I'm setting up my container incorrectly:
Dockerfile:
FROM python:latest
WORKDIR /path/to/project
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
docker-compose.yml
version: "3.9"
services:
  app:
    build: .
    working_dir: /path/to/project
    stdin_open: true
    tty: true
    ports:
      - "8080:8080"
    volumes:
      - type: bind
        source: .
        target: /path/to/project
I start up my container with docker-compose up and I'm using Docker Desktop (if that helps). I use this environment for development, so I start my server with python my_server.py. The server starts successfully. What else am I missing?
I've tried the following, based on what others have said about this problem online:
Explicitly setting 0.0.0.0 to be the host
Using EXPOSE 8080 in my Dockerfile
Set the network mode to host
Your container connectivity appears to be fine.
You have not specified any command to run in your container. To ensure your server is actually running on port 8080, you can add a CMD instruction in your Dockerfile or specify command in your docker-compose.yaml. This can be done with:
CMD ["python", "my_server.py"]
Or in your docker-compose.yaml file:
version: "3.9"
services:
  app:
    build: .
    working_dir: /path/to/project
    stdin_open: true
    tty: true
    command: python my_server.py # <== this overrides the Dockerfile CMD instruction
    ports:
      - "8080:8080"
    volumes:
      - type: bind
        source: .
        target: /path/to/project
You can learn more about command in docker-compose.yaml here.
That's also the recommended way to develop with docker: running one-off commands inside running containers is not the right way to do dev work. Instead, re-run docker compose each time you test a change. That avoids a lot of problems like these where connectivity doesn't work.
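For completeness, a minimal sketch of the kind of server described in the question, bound to 0.0.0.0 so the published port works. This assumes python-socketio with eventlet; the filename and handler are illustrative:

# my_server.py: a minimal python-socketio server sketch
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    print("client connected:", sid)

if __name__ == "__main__":
    # bind to 0.0.0.0, not 127.0.0.1, so the published port 8080 is reachable
    eventlet.wsgi.server(eventlet.listen(("0.0.0.0", 8080)), app)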

Running a django application on both the host machine and docker container at the same time

I have created a simple django application that has one endpoint, /health/live, which returns a success message upon receiving a GET request.
I run the application locally with python manage.py runserver on port 8000.
I also have a docker-compose file and a Dockerfile, shown below:
FROM python
ENV PYTHONUNBUFFERED 1
RUN mkdir /inventory
WORKDIR /inventory
COPY . /inventory
WORKDIR /inventory
RUN pip install -r requirements.txt
and
version: '3'
networks:
  kong-net:
    name: kong-net
    driver: bridge
    ipam:
      config:
        - subnet: 172.1.1.0/24
services:
  inventory:
    container_name: inventory
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      kong-net:
        ipv4_address: 172.1.1.11
    ports:
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: python manage.py runserver 0.0.0.0:8000
I then run docker-compose up (I don't detach it, so I can see the logs).
They both work. I send a GET request to http://127.0.0.1:8000/health/live:
Based on the logs I see, the request goes through the service running directly on the system, not the one in the docker container.
If I stop the service running directly without docker and send the request, it goes through the one deployed on docker.
Is there a reason this is happening? Why does the first one take priority?
And shouldn't I see an error when trying to run the docker container or start the application locally? They are both listening on port 8000!

Postgres in Docker/ Django app does not work - OperationalError: could not connect to server: Connection refused

I have a Django app that runs locally with no problems.
Now, I want to build a docker image of my app, but when I try to build it, I get the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
I am new to Django and Docker development, and I was looking at similar questions, but no answer solved my problem.
I'll show you my Dockerfile; I copied the Postgres commands from another project that does work:
FROM python:3.8
RUN apt-get update
RUN mkdir /project
WORKDIR /project
RUN apt-get install -y vim
COPY requirements.txt /project/
RUN pip install -r requirements.txt
COPY . /project/
# Install postgresql
RUN apt install -y postgresql postgresql-contrib
RUN service postgresql start
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Create a PostgreSQL role
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER admin WITH SUPERUSER PASSWORD 'passwd';" &&\
createdb -O admin plataforma
USER root
# setup postgresql
RUN sed -i "/^#listen_addresses/i listen_addresses='*'" /etc/postgresql/13/main/postgresql.conf
RUN sed -i "/^# DO NOT DISABLE\!/i # Allow access from any IP address" /etc/postgresql/13/main/pg_hba.conf
RUN sed -i "/^# DO NOT DISABLE\!/i host all all 0.0.0.0/0 md5\n\n\n" /etc/postgresql/13/main/pg_hba.conf
# running commands of my app
RUN python manage.py makemigrations accounts
RUN python manage.py sqlmigrate accounts 0001
RUN python manage.py migrate
RUN python manage.py inicializar
# Expose some ports
EXPOSE 22 5432 8080 8009 8000
# volumes
VOLUME ["/var/lib/postgresql/12/main"]
# Default command
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Here is my requirements.txt file:
#requirements.txt
Django==3.2.8
djangorestframework==3.9.1
gunicorn==19.9.0
pandas==1.3.3
path==16.2.0
six==1.14.0
Pillow==7.0.0
psycopg2>=2.8
django-environ==0.8.1
Here is my settings.py file (the databases fragment):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': env('POSTGRESQL_NAME'),
        'USER': env('POSTGRESQL_USER'),
        'PASSWORD': env('POSTGRESQL_PASS'),
        'HOST': env('POSTGRESQL_HOST'),
        'PORT': env('POSTGRESQL_PORT'),
    }
}
And here is my .env file:
POSTGRESQL_NAME=plataforma
POSTGRESQL_USER=admin
POSTGRESQL_PASS=passwd
POSTGRESQL_HOST=localhost
POSTGRESQL_PORT=5432
DEBUG=True
Some people told me that I could make a docker-compose file, so I erased the postgres install commands from the Dockerfile and made this:
version: "3.3"
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=plataforma
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=passwd
    depends_on:
      - db
But this doesn't work either.
Footnote: if I use Django's default database (SQLite) and (obviously) erase the postgres commands from the Dockerfile, I can build the Docker image without problems, and, again, this app with postgres works fine if I run it locally. So something is happening with Docker + postgres and I don't know what to do.
Can somebody help me? Thank you!
Edit: I erased the migration commands from the Dockerfile and changed the POSTGRESQL_HOST environment variable to db. Now when I run $ sudo docker-compose run web python manage.py runserver, the image is created but the container is not, and when I try to run a container with that image I get the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Sorry for my english, I am still learning it too.
EDIT AGAIN: I finally solved this issue in this question
Thanks for the help!
While creating the superuser, your command is
CREATE USER admin WITH SUPERUSER PASSWORD 'administrador'
but in your .env file you are using passwd as the password.
Change your .env from
POSTGRESQL_PASS=passwd
to this:
POSTGRESQL_PASS=administrador

Flask and selenium-hub are not communicating when dockerised

I am having issues getting data back from a docker-selenium container via a Flask application (also dockerized).
When I have the Flask application running in one container, I get the following error on http://localhost:5000, which calls the selenium Remote driver running on http://localhost:4444/wd/hub.
The error that is generated is:
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>
I have created a github repo with my code to test, see here.
My docker-compose file below seems ok:
version: '3.5'
services:
  web:
    volumes:
      - ./app:/app
    ports:
      - "5000:80"
    environment:
      - FLASK_APP=main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    command: flask run --host=0.0.0.0 --port=80
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
  selenium-hub:
    image: selenium/hub:3.141
    container_name: selenium-hub
    ports:
      - 4444:4444
  chrome:
    shm_size: 2g
    volumes:
      - /dev/shm:/dev/shm
    image: selenium/node-chrome:3.141
    # image: selenium/standalone-chrome:3.141.59-copernicium
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
What is strange is that when I run the Flask application in PyCharm, with the selenium grid up in docker, I am able to get the data back through http://localhost:5000. The issue only happens when the Flask app is running inside docker.
Thanks for the help in advance, let me know if you require further information.
Edit
So I amended my docker-compose.yml file to include a network (updated the code in github). As I've had the Flask app code running in debug and in a volume, any update to the code results in a refresh of the debugger.
I ran docker network inspect on the created network and found the internal docker IP address of selenium-hub. I updated the app/utils.py code in get_driver() to use that IP address in command_executor rather than localhost. Saving and re-running from my browser results in a successful return of data.
But I don't understand why http://localhost:4444/wd/hub would not work; the docker containers should see each other in the network as localhost, right?
the docker containers should see each other in the network as localhost, right?
No, this is only true when they use host networking and expose ports through the host.
When you have services interacting with each other in docker-compose (or stack), the services should refer to each other by service name. E.g. you would reach the hub container at http://selenium-hub:4444/wd/hub, and your Flask application could be reached by another container on the same network at http://web.
You may be confused if your default when running docker normally is host networking, because on the host network selenium-hub is also exposed on port 4444. So if you started a container with host networking, it could use http://localhost:4444 just fine there as well.
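In code, that means pointing the Remote driver at the service name. A minimal sketch against the selenium 3.141 images in the compose file above (the target URL is illustrative):

# reach the hub by its compose service name, not localhost
from selenium import webdriver
from selenium.webdriver import DesiredCapabilities

driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444/wd/hub",  # compose service name
    desired_capabilities=DesiredCapabilities.CHROME,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()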
Could potentially be a port in use issue related to the execution?
See:
Python urllib2: Cannot assign requested address