I am developing a Django-React app and using a MongoDB cluster to store data. When I run the app without Docker, I can make requests to the database without issue. However, when I run the Docker containers (one for my backend and one for my frontend), I run into this error on the backend:
File "/usr/local/lib/python3.9/site-packages/pymongo/topology.py", line 215, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 5f9ece0f7962ee81cb819b63, topology_type: Single, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:27017: [Errno 111] Connection refused')>]>
I have the MongoDB host in both mongo_client.py and settings.py. In settings.py I have:
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': '<mydb>',
        'HOST': 'mongodb+srv://mike:<mypassword>@cluster0.5u0xf.mongodb.net/<mydb>?retryWrites=true&w=majority',
        'USER': 'mike',
        'PASSWORD': '<mypassword>',
    }
}
My docker-compose yaml looks like:
version: "3.2"
services:
  portalbackend:
    restart: always
    container_name: code
    command: bash -c "python manage.py makemigrations &&
      python manage.py migrate &&
      python manage.py runserver 0.0.0.0:8000"
    build:
      context: ./PortalBackend/
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    networks:
      - db-net
  portal:
    restart: always
    command: npm start
    container_name: front
    build:
      context: ./portal/
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    stdin_open: true
    depends_on:
      - portalbackend
    networks:
      - db-net
networks:
  db-net:
    driver: bridge
Do I need to create a container for MongoDB? I originally tried that with a local MongoDB instance but ran into the same issue, so I tried a cluster instead. Still running into the same problem.
No, you don't need to add a Mongo container, as your database is in Atlas.
Please see my answer posted yesterday for a similar problem: Django + Mongo + Docker getting pymongo.errors.ServerSelectionTimeoutError
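One detail worth noting: the traceback shows pymongo falling back to localhost:27017, which usually means the Atlas URI was never picked up at all. djongo's documented way of passing a full connection string is the CLIENT key rather than the flat HOST/USER/PASSWORD fields. A minimal sketch of settings.py along those lines (the MONGO_URI env-var name and its default are assumptions, not from the question):

```python
import os

# Sketch: pass the Atlas URI through djongo's CLIENT key. The env-var name
# MONGO_URI and the fallback value below are illustrative placeholders.
MONGO_URI = os.environ.get(
    "MONGO_URI",
    "mongodb+srv://mike:<mypassword>@cluster0.5u0xf.mongodb.net/<mydb>"
    "?retryWrites=true&w=majority",
)

DATABASES = {
    "default": {
        "ENGINE": "djongo",
        "NAME": "<mydb>",
        "CLIENT": {
            "host": MONGO_URI,  # full connection string, including credentials
        },
    }
}
```

Reading the URI from the environment also lets the same settings file work both locally and inside the container.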
Related
So I tried to connect my Docker app (python-1) to another Docker app (postgres), but it's giving me this error:
psycopg.OperationalError: connection failed: Connection refused
python-1 | Is the server running on host "localhost" (127.0.0.1) and accepting
python-1 | TCP/IP connections on port 25432?
I've tried using condition: service_healthy, but it doesn't work. In fact, I already made sure my database is running before python-1 tries to connect, so the problem doesn't seem to be that the database hasn't started yet. I also tried 0.0.0.0 and the postgres container's IP, and using postgres as the host, and none of them work.
Here is my docker-compose.yml
version: "3.8"
services:
  postgres:
    image: postgres:14.6
    ports:
      - 25432:5432
    healthcheck:
      test: ["CMD-SHELL", "PGPASSWORD=${DB_PASSWORD}", "pg_isready", "-U", "${DB_USERNAME}", "-d", "${DB_NAME}"]
      interval: 30s
      timeout: 60s
      retries: 5
      start_period: 80s
    environment:
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
  python:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      postgres:
        condition: service_healthy
    command: flask --app app init-db && flask --app app run -h 0.0.0.0 -p ${PORT}
    ports:
      - ${PORT}:${PORT}
    environment:
      DB_HOST: localhost
      DB_PORT: 25432
      DB_NAME: ${DB_NAME}
      DB_USERNAME: ${DB_USERNAME}
      DB_PASSWORD: ${DB_PASSWORD}
And this is my Dockerfile:
# syntax=docker/dockerfile:1
FROM python:3.10
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
In a container, localhost means the container itself.
Different containers on a docker network can communicate using the service names as host names. Also, on the docker network, you use the unmapped port numbers.
So change your environment variables to
environment:
  DB_HOST: postgres
  DB_PORT: 5432
and you should be able to connect.
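On the application side, the same idea can be sketched by building the DSN from those environment variables, so the identical code works inside Compose (DB_HOST=postgres, DB_PORT=5432) and locally against the published port (DB_HOST=localhost, DB_PORT=25432). The default values below are illustrative, not from the question:

```python
import os

# Build a libpq-style DSN from the environment; defaults are placeholders.
host = os.environ.get("DB_HOST", "postgres")
port = os.environ.get("DB_PORT", "5432")
name = os.environ.get("DB_NAME", "appdb")
user = os.environ.get("DB_USERNAME", "appuser")

dsn = f"host={host} port={port} dbname={name} user={user}"
print(dsn)  # pass this to psycopg.connect(dsn) in the real app
```

This keeps the host/port decision out of the code entirely, which is exactly what lets the Compose environment block above do its job.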
I just want some help here; I'm stuck in Docker and can't find a way out. First, I'm on Windows, working on a Django app with Docker.
I'm using pgAdmin 4 with PostgreSQL 14 and created a new server for Docker.
The log for the Postgres Image:
2022-07-16 19:39:23.655 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-07-16 19:39:23.673 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-07-16 19:39:23.673 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-07-16 19:39:23.716 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-07-16 19:39:23.854 UTC [26] LOG: database system was shut down at 2022-07-16 16:50:47 UTC
2022-07-16 19:39:23.952 UTC [1] LOG: database system is ready to accept connections
PostgreSQL Database directory appears to contain a database; Skipping initialization
Log from my image (you can see that there are no migrations to apply):
0 static files copied to '/app/static', 9704 unmodified.
Operations to perform:
Apply all migrations: admin, auth, contenttypes, controle, sessions
Running migrations:
No migrations to apply.
Performing system checks...
System check identified no issues (0 silenced).
July 16, 2022 - 16:40:38
Django version 4.0.6, using settings 'setup.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
My docker-compose (updated):
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - django_net
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER = ${POSTGRES_USER}
      - POSTGRES_PASSWORD = ${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
  web:
    build: .
    command: >
      sh -c "python manage.py collectstatic --noinput &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db
    environment:
      - POSTGRES_NAME=${POSTGRES_NAME:-djangodb}
      - POSTGRES_USER=${POSTGRES_USER:-postgre}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgre}
    networks:
      - django_net
networks:
  django_net:
    driver: bridge
And my .env file (updated):
SECRET_KEY='django-insecure-1l2oh_bda$#s0w%d!#qyq8-09sn*8)6u-^wb(hx03==(vjk16h'
POSTGRES_NAME=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=mypass
POSTGRES_DB=mydb
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
So, looking at the last line of the Postgres log, it found my local DB (is that right?) and skipped initialization, but my superuser is gone, and so is my data.
Is there something I'm missing? Maybe it works like that, and I just don't know... Just to be sure, I took screenshots of pgAdmin and the app screen (screenshots omitted here).
My settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_NAME'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
        'HOST': 'db',
        'PORT': 5432,
    }
}
If I understood your question correctly, you can't connect to the created database.
If you want to connect to your containerized database from outside Docker, you should define the ports parameter in the db service of your docker-compose file.
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - django_net
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
I hope I understood your question correctly and that my answer helps you connect to the new database.
In this setup, I see two things:
You've configured DATABASE_URL to point to host.docker.internal, so your container is calling out of Docker space, to whatever's listening on port 5432.
In your Compose file, the db container does not have ports:, so you're not connecting to the database your Compose setup starts.
This implies to me that you're running another copy of the PostgreSQL server on your host, and your application is actually connecting to that. (Maybe you're on a MacOS host, and you installed it via Homebrew?)
You don't need to do any special setup to connect between containers; just use the Compose service name db as a host name. In particular, you do not need the special host.docker.internal name here. (You can also delete the networks: from the file, so long as you delete all of them; Compose creates a network named default for you and attaches containers to it automatically.) I might configure this in the Compose file, overriding the .env file:
version: '3.8'
services:
  db: { ... }
  web:
    environment:
      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${POSTGRES_DB}
I hope my answer helps you solve the problem. Please change the config as follows:
version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - django_net
    environment:
      - POSTGRES_DB=${POSTGRES_DB:-djangodb}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-changeme}
    ports:
      - "5432:5432"
  web:
    build: .
    command: >
      sh -c "python manage.py collectstatic --noinput &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db
    environment:
      - POSTGRES_NAME=${POSTGRES_NAME:-djangodb}
      - POSTGRES_USER=${POSTGRES_USER:-postgre}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgre}
    networks:
      - django_net
networks:
  django_net:
    driver: bridge
I'm trying to do a simple project in Django using the Neo4j database. I've installed the django-neomodel library and set the settings as follows:
import os
from neomodel import config

db_username = os.environ.get('NEO4J_USERNAME')
db_password = os.environ.get('NEO4J_PASSWORD')
config.DATABASE_URL = f'bolt://{db_username}:{db_password}@localhost:7687'
created a model:
class Task(StructuredNode):
    id = UniqueIdProperty()
    title = StringProperty()
added 'django_neomodel' to INSTALLED_APPS, and removed the default database configuration. When I try to open the website, it raises the error: ImproperlyConfigured at /: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
That's not the only error, because after running the python manage.py install_labels command it raises: ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error)) neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv6Address(('::1', 7687, 0, 0)) (reason [Errno 99] Cannot assign requested address).
I'm pretty sure the database itself works correctly, because I can access it in the browser (screenshot omitted).
docker-compose:
version: "3.9"
services:
  api:
    container_name: mm_backend
    build:
      context: ./
      dockerfile: Dockerfile.dev
    command: pipenv run python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./:/usr/src/mm_backend
    ports:
      - 8000:8000
    env_file: .env
    depends_on:
      - db
  db:
    container_name: mm_db
    image: neo4j:4.1
    restart: unless-stopped
    ports:
      - "7474:7474"
      - "7687:7687"
    volumes:
      - ./db/data:/data
      - ./db/logs:/logs
Well, after some research I found this post: Docker-compose: db connection from web container to neo4j container using bolt, and the problem has been solved.
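The gist of that fix is the same container-networking rule as in the earlier threads: inside the Compose network, the Neo4j container is reachable by its service name (db here), not by localhost. A minimal sketch of the corrected settings (the credential defaults below are placeholders, not from the question):

```python
import os

# Point neomodel at the Compose service name "db" instead of localhost.
# The fallback credentials are illustrative only.
db_username = os.environ.get('NEO4J_USERNAME', 'neo4j')
db_password = os.environ.get('NEO4J_PASSWORD', 'secret')

DATABASE_URL = f'bolt://{db_username}:{db_password}@db:7687'
print(DATABASE_URL)  # assign to neomodel's config.DATABASE_URL in settings.py
```

The "Cannot assign requested address" on ::1 in the question is consistent with this: inside the api container, localhost resolves to the container itself, where nothing listens on 7687.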
I am starting a Flask project and want to run unit tests in PyCharm using the Docker remote interpreter, but I can't connect to the MySQL database container when running the tests. The application runs normally, so the database is reachable from outside the container. In the past I managed to do this in PhpStorm, but the configuration in PyCharm isn't the same, and I'm having trouble setting everything up. I've already managed to use the remote interpreter to run tests; the only trouble is when I need to connect to the database.
I am getting the following error when trying to connect:
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'localhost:3306' (111 Connection refused)
So the server is reachable, but for whatever reason it is refusing the connection.
Here is the docker-compose.yml
version: "2"
networks:
  learning_flask:
    name: learning_flask
    driver: bridge
services:
  mysql:
    image: mysql:5.7
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "127.0.0.1:3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./db:/docker-entrypoint-initdb.d/:ro
    networks:
      - learning_flask
  app:
    build: ./app
    container_name: learning_flask_app
    ports:
      - "5000:5000"
    volumes:
      - './app:/app'
    depends_on:
      - mysql
    networks:
      - learning_flask
and then the code I am trying to execute:
import unittest

import mysql.connector


class TestCase(unittest.TestCase):
    def test_something(self):
        config = {
            'user': 'root',
            'password': 'root',
            'host': 'localhost',
            'port': '3306'
        }
        connection = mysql.connector.connect(**config)


if __name__ == '__main__':
    unittest.main()
If I try to change the host on the connection config to mysql, I get the following error:
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'mysql:3306' (-2 Name or service not known)
I'm new to Docker, and I'm trying to put my Django REST API in containers with Nginx, Gunicorn, and Postgres, using docker-compose and docker-machine, following this tutorial: https://realpython.com/blog/python/django-development-with-docker-compose-and-machine/
Most of my code is the same as the tutorial's (https://github.com/realpython/dockerizing-django). with some minor name changes.
This is my docker-compose.yml (I changed the gunicorn command to runserver for debugging purposes):
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
    - redis:redis
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file: .env
  environment:
    DEBUG: 'true'
  command: /usr/local/bin/python manage.py runserver
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
redis:
  restart: always
  image: redis:latest
  ports:
    - "6379:6379"
  volumes:
    - redisdata:/data
And this is in my settings.py of Django:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'postgres',
        'PORT': '5432',
    }
}
Nginx and Postgres (and Redis) are up and running; however, my Django server won't start, with this error:
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 5432?
web_1 | could not connect to server: Connection refused
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 5432?
I've googled a lot, and I've verified that Postgres is running on port 5432; I can connect to it using the psql command.
I am lost. What is my mistake?
EDIT: It appears that it is not using my settings.py file or something, since it's asking if the server is running on localhost, while settings should be looking for postgres.
Those who have this problem, please check your settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'db'
    }
}
Your HOST: 'db' and the service name db in your docker-compose file should be the same. If you want to rename db, make sure you change it in both the docker-compose file and settings.py:
db:
  restart: always
  image: postgres:latest
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
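A variation on this idea, if you also want to run the app outside Docker: read the host from an environment variable with the service name as the default. This is not from the original answer; the POSTGRES_HOST env-var name and the credential placeholders are assumptions:

```python
import os

# settings.py sketch: HOST defaults to the Compose service name "db",
# but can be overridden (e.g. POSTGRES_HOST=localhost) for local runs.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': os.environ.get('POSTGRES_HOST', 'db'),
    }
}
print(DATABASES['default']['HOST'])
```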
Check your manage.py; there should be a line:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
If there is no such line, add it and set your DJANGO_SETTINGS_MODULE with respect to your PYTHONPATH.
UPD: I cloned your repo and got the web service to launch by changing the command in docker-compose.yml:
- command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
+ command: python manage.py runserver 0.0.0.0:8000
I'm sure DJANGO_SETTINGS_MODULE is correct.
I was facing exactly the same issue while running my Django app with Docker on an AWS EC2 instance.
I noticed that this error only happened the first time the Docker image was built, so to fix it I just ran:
CTRL + C, then docker-compose up again, and everything worked fine.
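That restart working on the second try suggests a startup race: the web container tried to connect before Postgres was ready to accept connections. A more robust fix than restarting by hand is to retry the connection at startup. A minimal, generic sketch (the wait_for helper and the flaky_connect stand-in are illustrative, not from any of the answers above):

```python
import time


def wait_for(connect, attempts=10, delay=1.0):
    """Call `connect` until it succeeds or the attempts run out."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except OSError:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)


# Hypothetical stand-in for a real DB connection: fails twice, then succeeds.
calls = {'n': 0}


def flaky_connect():
    calls['n'] += 1
    if calls['n'] < 3:
        raise OSError("Connection refused")
    return "connected"


print(wait_for(flaky_connect, attempts=5, delay=0))  # prints "connected"
```

In Compose, the equivalent declarative fix is a healthcheck on the database service plus depends_on with condition: service_healthy, as shown in one of the earlier questions.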