Errors during configuration of the neo4j django-neomodel - python

I'm trying to build a simple project in Django using the Neo4j database. I've installed the django-neomodel library and configured the settings as follows:
import os
from neomodel import config
db_username = os.environ.get('NEO4J_USERNAME')
db_password = os.environ.get('NEO4J_PASSWORD')
config.DATABASE_URL = f'bolt://{db_username}:{db_password}@localhost:7687'
created a model:
from neomodel import StructuredNode, StringProperty, UniqueIdProperty

class Task(StructuredNode):
    id = UniqueIdProperty()
    title = StringProperty()
added 'django_neomodel' to INSTALLED_APPS, and removed the default database configuration. When I try to open the site it raises this error: ImproperlyConfigured at /: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
And that's not the only error: after running the python manage.py install_labels command it raises:
ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error))
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv6Address(('::1', 7687, 0, 0)) (reason [Errno 99] Cannot assign requested address)
I'm pretty sure the database itself works correctly, because I can access it in the browser (screenshot omitted).
docker-compose:
version: "3.9"
services:
api:
container_name: mm_backend
build:
context: ./
dockerfile: Dockerfile.dev
command: pipenv run python manage.py runserver 0.0.0.0:8000
volumes:
- ./:/usr/src/mm_backend
ports:
- 8000:8000
env_file: .env
depends_on:
- db
db:
container_name: mm_db
image: neo4j:4.1
restart: unless-stopped
ports:
- "7474:7474"
- "7687:7687"
volumes:
- ./db/data:/data
- ./db/logs:/logs

Well, after some research I found the post "Docker-compose: db connection from web container to neo4j container using bolt", and the problem has been solved: since Django itself runs in a container, it has to reach Neo4j through the Compose service name rather than localhost.
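A minimal sketch of what that fix looks like in the settings, assuming the Neo4j service is named db as in the compose file above (the NEO4J_HOST variable is an illustrative addition, not part of the original question); the ImproperlyConfigured error should also disappear once a placeholder DATABASES entry, e.g. SQLite, is restored, since Django's built-in apps still expect one:

import os
from neomodel import config

# Inside the Compose network, 'localhost' points at the Django container
# itself; the Neo4j container is reachable under its service name 'db'.
db_username = os.environ.get('NEO4J_USERNAME')
db_password = os.environ.get('NEO4J_PASSWORD')
db_host = os.environ.get('NEO4J_HOST', 'db')  # illustrative override; defaults to the service name
config.DATABASE_URL = f'bolt://{db_username}:{db_password}@{db_host}:7687'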

Related

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: nodename nor servname provided

I am following a YouTube tutorial to create an API using FastAPI. I am new to both Docker and FastAPI. I have managed to create one Docker container for the API and another for the Postgres database using this Dockerfile and docker-compose:
Dockerfile:
FROM python:3.9.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
docker-compose:
version: '3'
services:
  api:
    build: .
    depends_on:
      - postgres
    ports:
      - 80:8000
    volumes:
      - ./:/usr/src/app:ro
    env_file:
      - ./.env
    command: bash -c "uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload"
  postgres:
    image: postgres
    ports:
      - 5432:5432
    env_file:
      - ./.env
    volumes:
      - postgres-db:/var/lib/postgresql/data
volumes:
  postgres-db:
I am also using Alembic, so after setting up both containers, I run
docker-compose run api alembic upgrade head
in order to create the corresponding tables.
Everything runs smoothly when testing with Postman. I can run GET and POST queries that interact with the database and there are no issues. The problem comes when trying to test the API code using TestClient from FastAPI. I am running two tests:
from fastapi.testclient import TestClient
from app.main import app

client = TestClient(app)

def test_root():
    response = client.get("/")
    assert response.json().get('message') == "Welcome to my API"
    assert response.status_code == 200

def test_create_user():
    response = client.post("/users/", json={"email": "hashed_email1222@gmail.com", "password": "test"})
    assert response.status_code == 201
The first one passes without a problem, but the second one fails with the following error message:
FAILED tests/test_users.py::test_create_user - sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: nodename nor servname provided, or not known
I cannot figure out what I am doing wrong. Since there are no issues when testing using Postman, I am not sure if there are any configurations that need to be set in order to use TestClient.
Could someone help me?
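The usual explanation for this is that pytest runs on the host, outside the Compose network, where the service name postgres does not resolve; only containers attached to that network can use it. A minimal sketch of one workaround, assuming the app builds its connection string from an environment variable (the DATABASE_HOSTNAME name is illustrative, not from the question):

# conftest.py: must run before app.main is imported by the tests.
import os

# On the host, "postgres" is unknown; the container's port 5432 is
# published to localhost:5432 by the compose file above.
os.environ.setdefault("DATABASE_HOSTNAME", "localhost")

Alternatively, the tests can run inside the api container, where "postgres" resolves normally, e.g. docker-compose run api pytest (assuming pytest is installed in the image).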

PostgreSQL local data not showing in Docker Container

I just want some help here; I'm kind of stuck in Docker and can't find a way out. First, I'm using Windows for a Django app with Docker.
I'm using pgAdmin 4 with PostgreSQL 14 and created a new server for Docker.
The log for the Postgres Image:
2022-07-16 19:39:23.655 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-07-16 19:39:23.673 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-07-16 19:39:23.673 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-07-16 19:39:23.716 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-07-16 19:39:23.854 UTC [26] LOG: database system was shut down at 2022-07-16 16:50:47 UTC
2022-07-16 19:39:23.952 UTC [1] LOG: database system is ready to accept connections
PostgreSQL Database directory appears to contain a database; Skipping initialization
Log from my image (you can see that there are no migrations to apply):
0 static files copied to '/app/static', 9704 unmodified.
Operations to perform:
Apply all migrations: admin, auth, contenttypes, controle, sessions
Running migrations:
No migrations to apply.
Performing system checks...
System check identified no issues (0 silenced).
July 16, 2022 - 16:40:38
Django version 4.0.6, using settings 'setup.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
My docker-compose (updated):
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - django_net
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
  web:
    build: .
    command: >
      sh -c "python manage.py collectstatic --noinput &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db
    environment:
      - POSTGRES_NAME=${POSTGRES_NAME:-djangodb}
      - POSTGRES_USER=${POSTGRES_USER:-postgre}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgre}
    networks:
      - django_net
networks:
  django_net:
    driver: bridge
And my .env file (updated):
SECRET_KEY='django-insecure-1l2oh_bda$#s0w%d!#qyq8-09sn*8)6u-^wb(hx03==(vjk16h'
POSTGRES_NAME=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=mypass
POSTGRES_DB=mydb
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
So, judging by the last line of the Postgres log, it found my local DB (is that right?) and skipped initialization, but my superuser is gone and so is my data.
Is there something that I'm missing? Maybe it's supposed to work like that and I don't know... Just to be sure, I took some screenshots from pgAdmin and the app screen (screenshots omitted).
My settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_NAME'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
        'HOST': 'db',
        'PORT': 5432,
    }
}
If I understood your question correctly, you can't connect to the created database.
If you want to connect to your containerized database from outside Docker, you should define the ports parameter in your db service in the docker-compose file:
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - django_net
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
I hope I understood your question correctly, that you can't connect to the new database, and I hope my answer helps you.
In this setup, I see two things:
You've configured DATABASE_URL to point to host.docker.internal, so your container is calling out of Docker space, to whatever's listening on port 5432.
In your Compose file, the db container does not have ports:, so you're not connecting to the database your Compose setup starts.
This implies to me that you're running another copy of the PostgreSQL server on your host, and your application is actually connecting to that. (Maybe you're on a macOS host and installed it via Homebrew?)
You don't need to do any special setup to connect between containers; just use the Compose service name db as a host name. In particular you do not need the special host.docker.internal name here. (You can also delete the networks: from the file, so long as you delete all of them; Compose creates a network named default for you and attaches containers to it automatically.) I might configure this in the Compose file, overriding the .env file:
version: '3.8'
services:
  db: { ... }
  web:
    environment:
      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${POSTGRES_DB}
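For completeness, a sketch of how settings.py might consume that single DATABASE_URL, assuming the dj-database-url package is used (an assumption on my part; the question's settings.py reads the individual POSTGRES_* variables instead):

import os
import dj_database_url  # pip install dj-database-url

# Parse the URL injected by Compose into Django's DATABASES setting.
DATABASES = {
    'default': dj_database_url.parse(os.environ['DATABASE_URL'], conn_max_age=600),
}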
I hope my answer helps you solve the problem. Please change the config as follows:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
networks:
- django_net
environment:
- POSTGRES_DB=${POSTGRES_DB:-djangodb}
- POSTGRES_USER = ${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD = ${POSTGRES_PASSWORD:-changeme}
ports:
- "5432:5432"
web:
build: .
command: >
sh -c "python manage.py collectstatic --noinput &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/app
ports:
- "8000:8000"
depends_on:
- db
links:
- db
environment:
- POSTGRES_NAME=${POSTGRES_NAME:-djangodb}
- POSTGRES_USER=${POSTGRES_USER:-postgre}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgre}
networks:
- django_net
networks:
django_net:
driver: bridge

Error asyncpg.exceptions.InvalidPasswordError: password authentication failed for user "steamtrader_pguser"

I am trying to run a Telegram bot built with Aiogram (Python) and a PostgreSQL database on an Ubuntu 20.04 server, in Docker using docker-compose. The script runs, but I get
asyncpg.exceptions.InvalidPasswordError: password authentication failed for user "steamtrader_pguser"
Here is my docker-compose.yml file:
version: "3.1"
services:
steamtraderpurchases_db:
container_name: steamtraderpurchases_db
image: sameersbn/postgresql:10-2
environment:
PG_PASSWORD: $PGPASSWORD
restart: always
ports:
- 5432:5432
networks:
- steamtraderpurchases_botnet
volumes:
- ./pgdata:/var/lib/postgresql
steamtraderpurchases_bot:
container_name: steamtraderpurchases
build:
context: .
command: python app.py
restart: always
networks:
- steamtraderpurchases_botnet
env_file:
- ".env"
volumes:
- .:/src
depends_on:
- steamtraderpurchases_db
ports:
- 8443:3001
networks:
steamtraderpurchases_botnet:
driver: bridge
Here is the Dockerfile:
FROM python:3.9.5
WORKDIR /src
COPY requirements.txt /src
RUN pip install -r requirements.txt
COPY . /src
In the .env file I have specified:
DATABASE=steamtrader
PGUSER=steamtrader_pguser
PGPASSWORD=password
DB_HOST=steamtraderpurchases_db
IP=*ip of my server*
The rest of the code is a Telegram bot. Also, I create the database with the required fields using SQLAlchemy and Gino. This is my first time trying to run a bot on a server, so I would be very grateful for your help!
UPD:
I found these lines in the logs when running docker-compose:
FATAL: password authentication failed for user "steamtrader_pguser"
DETAIL: Role "steamtrader_pguser" does not exist.
I tried to create a role, but nothing worked.
Please see sameersbn/postgresql's doc:
By default the postgres user is not assigned a password and as a result you can only login to the PostgreSQL server locally. If you wish to login remotely to the PostgreSQL server as the postgres user, you will need to assign a password for the user using the PG_PASSWORD variable.
The above means that the user corresponding to PG_PASSWORD is postgres, not steamtrader_pguser. To add a new user steamtrader_pguser, you have to follow this:
A new PostgreSQL database user can be created by specifying the DB_USER and DB_PASS variables while starting the container.
docker run --name postgresql -itd --restart always \
    --env 'DB_USER=dbuser' --env 'DB_PASS=dbuserpass' \
    sameersbn/postgresql:12-20200524
The above means you need to set DB_USER and DB_PASS in the environment section of your docker-compose.yml.
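A sketch of what that could look like for the db service above, reusing the .env values from the question (DB_NAME is this image's variable for also creating a database; treat it as an assumption and check the image's docs):

steamtraderpurchases_db:
  container_name: steamtraderpurchases_db
  image: sameersbn/postgresql:10-2
  environment:
    PG_PASSWORD: $PGPASSWORD  # password for the built-in postgres superuser
    DB_USER: $PGUSER          # should create the steamtrader_pguser role
    DB_PASS: $PGPASSWORD
    DB_NAME: $DATABASE        # assumption: creates the steamtrader database

Note that these variables only take effect when the data directory is first initialised; since ./pgdata already contains data, it may need to be removed (or the role created manually) before they apply.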

Django on Docker is starting up but browser gives empty response

For a simple app with Django, Python 3, and Docker on macOS.
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
CMD python3 manage.py runserver
COPY . /code/
docker-compose.yml
version: "3.9"
services:
# DB
db:
image: mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: '****'
MYSQL_USER: '****'
MYSQL_PASSWORD: '****'
MYSQL_DATABASE: 'mydb'
ports:
- "3307:3306"
expose:
# Opens port 3306 on the container
- '3307'
volumes:
- $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
# Web app
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
Also, what I wanted is to execute the SQL the first time the container is created; after that, the database should be mounted.
volumes:
  - $HOME/proj/sql/mydbdata.sql:/mydbdata.sql
Docker looks like it is starting, but from my browser I get this response:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
What is it that I am missing? Please help.
Looks like your Django project is already running when you create the image. Since you use the command option in the docker-compose.yml file, you don't need the CMD command in the Dockerfile in this case.
I would rewrite Dockerfile and docker-compose.yml as follows:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN python3 -m pip install -r requirements.txt
COPY . /code/
version: "3.9"
services:
db:
image: mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: '****'
MYSQL_USER: '****'
MYSQL_PASSWORD: '****'
MYSQL_DATABASE: 'mydb'
ports:
- "3307:3306" # make sure django project connects to 3306 port
volumes:
- $HOME/proj/sql:/docker-entrypoint-initdb.d
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
A few things to point out.
When you run docker-compose up, you will probably see an error, because your Django project will already be running even before the db is initialised.
That's natural, so you need a customized command or shell script to force the Django project to wait until it can connect to the db.
In my case I would use a custom command.
version: "3.9"
services:
db:
image: mysql:8
env_file:
- .env
command:
- --default-authentication-plugin=mysql_native_password
restart: always
ports:
- "3308:3306"
web:
build: .
command: >
sh -c "python manage.py wait_for_db &&
python manage.py makemigrations &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
ports:
- "8001:8000"
depends_on:
- db
env_file:
- .env
Next, wait_for_db.py. This is a file I created at myapp/management/commands/wait_for_db.py. With it, you postpone the db connection until the db is ready. This SO post has helped me a lot.
See Writing custom django-admin commands for details.
import time

from django.db import connection
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    """Wait to connect to db until db is initialised"""

    def handle(self, *args, **options):
        start = time.time()
        self.stdout.write('Waiting for database...')
        while True:
            try:
                connection.ensure_connection()
                break
            except OperationalError:
                time.sleep(1)
        end = time.time()
        self.stdout.write(self.style.SUCCESS(f'Database available! Time taken: {end-start:.4f} second(s)'))
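One caveat: Django only discovers the command if wait_for_db.py lives inside an installed app, and both the management and management/commands directories contain an __init__.py; otherwise manage.py reports it as an unknown command.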
Looks like you want to populate your database with an sql file when your db container starts running. The MySQL page on Docker Hub says:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
So your .sql file should be located in /docker-entrypoint-initdb.d in your mysql container. See this post for more.
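Applied to the compose file from the question, that could look like the following (a sketch; the filename inside the container is arbitrary, only the directory matters):

volumes:
  - $HOME/proj/sql/mydbdata.sql:/docker-entrypoint-initdb.d/mydbdata.sql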
Last but not least, your db is lost when you run docker-compose down, since you don't have volumes other than the sql file. If that's not what you want, you might want to consider the following:
version: "3.9"
services:
db:
...
volumes:
- data:/var/lib/mysql
...
volumes:
data:

Can not connect PostgreSQL database from docker to python

I am trying to use PostgreSQL with Python. I have used the following docker-compose file:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: admin_123
      POSTGRES_USER: admin
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
With the following code, I am trying to connect to the database:

import psycopg2

conn = psycopg2.connect(
    database="db_test",
    user="admin",
    password="admin_123",
    host="db"
)
But I am getting this error:
OperationalError: could not translate host name "db" to address:
nodename nor servname provided, or not known
What am I doing wrong?
You need to expose the DB port in the docker-compose file like this:
db:
  image: postgres
  restart: always
  environment:
    POSTGRES_PASSWORD: admin_123
    POSTGRES_USER: admin
  ports:
    - "5432:5432"
And then connect with localhost:5432
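Concretely, a sketch of the adjusted connection, assuming the script runs on the host rather than inside a container (inside the Compose network the original host="db" would be correct):

import psycopg2

conn = psycopg2.connect(
    database="db_test",
    user="admin",
    password="admin_123",
    host="localhost",  # the service name "db" only resolves inside the Compose network
    port=5432,         # published by the ports mapping above
)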
Another possible scenario: check whether the ports are already in use by another docker container.
Use this command:
$ docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
Here is my docker-compose.yml
$ cat docker-compose.yml
version: '3.1' # specify docker-compose version
services:
  dockerpgdb:
    image: postgres
    ports:
      - "5432:5432"
    restart: always
    environment:
      POSTGRES_PASSWORD: Password
      POSTGRES_DB: dockerpgdb
      POSTGRES_USER: abcUser
    volumes:
      - ./data:/var/lib/postgresql
Now in pgAdmin 4 you can set up a new server as below to test the connection:
host: localhost
port: 5432
maintenance database: postgres
username: abcUser
password: Password
