PostgreSQL local data not showing in Docker Container - python

I just want some help here; I'm kinda stuck with Docker and can't find a way out. First, some context: I'm on Windows, working on a Django app with Docker.
I'm using pgAdmin 4 with PostgreSQL 14 and created a new server for Docker.
The log for the Postgres image:
2022-07-16 19:39:23.655 UTC [1] LOG: starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-07-16 19:39:23.673 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-07-16 19:39:23.673 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-07-16 19:39:23.716 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-07-16 19:39:23.854 UTC [26] LOG: database system was shut down at 2022-07-16 16:50:47 UTC
2022-07-16 19:39:23.952 UTC [1] LOG: database system is ready to accept connections
PostgreSQL Database directory appears to contain a database; Skipping initialization
The log from my image (you can see that there are no migrations to apply):
0 static files copied to '/app/static', 9704 unmodified.
Operations to perform:
Apply all migrations: admin, auth, contenttypes, controle, sessions
Running migrations:
No migrations to apply.
Performing system checks...
System check identified no issues (0 silenced).
July 16, 2022 - 16:40:38
Django version 4.0.6, using settings 'setup.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
My docker-compose (updated):
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - django_net
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER = ${POSTGRES_USER}
      - POSTGRES_PASSWORD = ${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
  web:
    build: .
    command: >
      sh -c "python manage.py collectstatic --noinput &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db
    environment:
      - POSTGRES_NAME=${POSTGRES_NAME:-djangodb}
      - POSTGRES_USER=${POSTGRES_USER:-postgre}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgre}
    networks:
      - django_net
networks:
  django_net:
    driver: bridge
And my .env file (updated):
SECRET_KEY='django-insecure-1l2oh_bda$#s0w%d!#qyq8-09sn*8)6u-^wb(hx03==(vjk16h'
POSTGRES_NAME=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=mypass
POSTGRES_DB=mydb
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
So, analyzing the last line of the Postgres log, it found my local DB (is that right?) and didn't initialize it, but my superuser is gone and so is my data.
Is there something that I'm missing? Maybe it's supposed to be like that and I just don't know... Just to be sure, I included some screenshots from pgAdmin and the app screen.
DB: (screenshot)
My APP: (screenshot)
My settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_NAME'),
        'USER': os.environ.get('POSTGRES_USER'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD'),
        'HOST': 'db',
        'PORT': 5432,
    }
}

If I understood your question correctly, you can't connect to the created database.
If you want to connect to your containerized database from outside Docker, you should define the ports parameter in the db service of your docker-compose file:
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - django_net
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      - "5432:5432"
I hope I understood your question correctly (that you can't connect to the new database) and that my answer helps you.

In this setup, I see two things:
You've configured DATABASE_URL to point to host.docker.internal, so your container is calling out of Docker space, to whatever's listening on port 5432.
In your Compose file, the db container does not have ports:, so you're not connecting to the database your Compose setup starts.
This implies to me that you're running another copy of the PostgreSQL server on your host, and your application is actually connecting to that. (Maybe you're on a macOS host, and you installed it via Homebrew?)
You don't need to do any special setup to connect between containers; just use the Compose service name db as a host name. In particular, you do not need the special host.docker.internal name here. (You can also delete the networks: from the file, so long as you delete all of them; Compose creates a network named default for you and attaches containers to it automatically.) I might configure this in the Compose file, overriding the .env file:
version: '3.8'
services:
  db: { ... }
  web:
    environment:
      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db/${POSTGRES_DB}
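If you go the DATABASE_URL route, Django still has to parse that URL; the settings.py in the question uses individual POSTGRES_* variables, so this is only a sketch of one common approach, using the third-party dj-database-url package (an assumption on my part, not something shown in the question):

import os

import dj_database_url  # third-party package: pip install dj-database-url

# Build Django's DATABASES setting from DATABASE_URL; if that variable is not
# set, fall back to the individual POSTGRES_* variables and the "db" service.
DATABASES = {
    "default": dj_database_url.config(
        default="postgres://{}:{}@db:5432/{}".format(
            os.environ.get("POSTGRES_USER", ""),
            os.environ.get("POSTGRES_PASSWORD", ""),
            os.environ.get("POSTGRES_NAME", "postgres"),
        )
    )
}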

I hope my answer helps you solve the problem. Please change the config as follows:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
networks:
- django_net
environment:
- POSTGRES_DB=${POSTGRES_DB:-djangodb}
- POSTGRES_USER = ${POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD = ${POSTGRES_PASSWORD:-changeme}
ports:
- "5432:5432"
web:
build: .
command: >
sh -c "python manage.py collectstatic --noinput &&
python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/app
ports:
- "8000:8000"
depends_on:
- db
links:
- db
environment:
- POSTGRES_NAME=${POSTGRES_NAME:-djangodb}
- POSTGRES_USER=${POSTGRES_USER:-postgre}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-postgre}
networks:
- django_net
networks:
django_net:
driver: bridge

Related

(pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysql' ([Errno -3] Temporary failure in name resolution

I am trying to Dockerize a FastAPI app that uses MySQL and Selenium.
I am having issues connecting MySQL with the FastAPI app in Docker.
I have tried to establish a connection with the MySQL container using MySQL Workbench, which worked well using 'localhost' as the host. However, when I try to run the FastAPI container, which should connect to the MySQL database, I get this error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysql' ([Errno -3] Temporary failure in name resolution
Here is docker-compose.yml:
version: '3'
services:
  chrome:
    build: .
    image: selenium/node-chrome:3.141.59-20210929
    ports:
      - "4444:4444"
      - "5900:5900"
    volumes:
      - "/dev/shm:/dev/shm"
    networks:
      - selenium
  mysql:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=autojob
      - MYSQL_USER=user
      - MYSQL_PASSWORD=4444
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    volumes:
      - ./init:/docker-entrypoint-initdb.d
      - autojob:/var/lib/mysql
    ports:
      - "3307:3306"
    expose:
      - "3307"
  app:
    build: .
    restart: on-failure
    container_name: "autojobserve_container"
    command:
      uvicorn autojobserve.app:app --host 0.0.0.0 --port 8000 --reload
    ports:
      - 8000:8000
    volumes:
      - "./:/app"
    networks:
      - selenium
    depends_on:
      mysql:
        condition: service_healthy
volumes:
  autojob: {}
networks:
  selenium:
Here is the line that connects to MySQL in FastAPI:
engine = create_engine("mysql+pymysql://user:4444@mysql:3307/autojob")
In Docker Desktop, it shows that the MySQL container is ready for connections too:
2022-11-08T11:49:26.334069Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-11-08T11:49:26.334869Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.31' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
2022-11-08 11:49:14+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
2022-11-08 11:49:14+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-11-08 11:49:14+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
'/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
What possibly could be wrong?
Note: Everything works well before dockerizing.
Your app container declares networks: [selenium]. The mysql container doesn't have a networks: block at all, so Compose automatically inserts networks: [default]. Since the two containers aren't on the same Docker network, they can't communicate with each other, and one way that shows up is the DNS-resolution error you're getting.
The setup I'd recommend here is to delete all of the networks: blocks in the whole file. Compose will automatically create the default network and attach containers to it, and for most applications this is a correct setup.
(You also do not need the obsolete expose: option, or to manually specify container_name:. You should not need to use volumes: to inject code into your container or command: either; the code and its default command should generally be specified in the Dockerfile.)
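Once both containers share the default network, the service name mysql resolves from the app container. One detail to keep in mind: inside that network you connect to the container's own port 3306; the published 3307 only applies from the host. A sketch of the connection line under those assumptions:

from sqlalchemy import create_engine

# Service name "mysql" as the host, container port 3306
# (not the host-published 3307).
engine = create_engine("mysql+pymysql://user:4444@mysql:3306/autojob")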

Access mariadb docker-compose container from host-machine

I'm trying to access a mariadb-container from a python script on my host-machine (MacOS).
I tried all network_modes (host, bridge, default), but nothing works.
I was able to connect to the container through phpmyadmin, but only if both containers are in the same docker-compose-network.
Here is my docker-compose.yml with the attempt on network_mode host:
version: '3.9'
services:
  mariadb:
    image: mariadb:10.9.1-rc
    container_name: mariadb
    network_mode: bridge
    ports:
      - 3306:3306
    volumes:
      - ...
    environment:
      - MYSQL_ROOT_PASSWORD=mysqlroot
      - MYSQL_PASSWORD=mysqlpw
      - MYSQL_USER=test
      - MYSQL_DATABASE=test1
      - TZ=Europe/Berlin
  phpmyadmin:
    image: phpmyadmin:5.2.0
    network_mode: bridge
    container_name: pma
    # links:
    #   - mariadb
    environment:
      - PMA_HOST=mariadb
      - PMA_PORT=3306
      - TZ=Europe/Berlin
    ports:
      - 8081:80
Any tips on how to access the container through the Python mariadb package?
Thanks!
Everything seems okay; just check the params when trying to connect to the db:
host: 0.0.0.0
port: 3306 (as in the docker-compose)
user: test (as in the docker-compose)
password: mysqlpw (as in the docker-compose)
database: test1 (as in the docker-compose)
example:
db = MySQLdb.connect("0.0.0.0","test","mysqlpw","test1")
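If you specifically want the Python mariadb package mentioned in the question, the equivalent call looks roughly like this (a sketch, assuming the 3306:3306 port mapping from the compose file and connecting from the host machine):

import mariadb  # pip install mariadb

# Connect from the host machine through the published 3306:3306 mapping.
conn = mariadb.connect(
    host="127.0.0.1",
    port=3306,
    user="test",         # MYSQL_USER
    password="mysqlpw",  # MYSQL_PASSWORD
    database="test1",    # MYSQL_DATABASE
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()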

Cron in docker container with django

I have a Django project and made a management command for Django. This command deletes some objects from the db. If I run it manually with "docker-compose exec web python manage.py mycommand" it works fine, but when I try to add the task to cron using crontab (cron is located in the Django container):
*/1 * * * * path/to/python path/to/manage.py mycommand >> cron.log 2>&1
django raise:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "0.0.0.0" and accepting
TCP/IP connections on port 5432?
Tasks without a db connection work fine.
The problem is that tasks executed by cron can't connect to the db. Any ideas how to change the crontab file or docker-compose.yml?
docker-compose.yml
version: '3'
services:
  web:
    container_name: web
    env_file:
      - .env.dev
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - .:/domspass
      - "/var/run/docker.sock:/var/run/docker.sock"
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13.2
    container_name: db
    restart: always
    env_file:
      - db.env.dev
    volumes:
      - ./postgres:/var/lib/postgresql/data
    ports:
      - "5432:5432"
volumes:
  postgres:

Django server inside docker is causing robot tests to give blank screens?

I have built an environment with Docker Compose in order to run Robot Framework tests. The environment consists of a Django web app, Postgres, and a Robot Framework container. The problem I have is that I get many blank screens in different tests, while an external Django web app instance installed on a virtual machine doesn't have this problem.
The blank screens mean that elements are not found, hence so many failures:
JavascriptException: Message: javascript error: Cannot read property 'get' of undefined
(Session info: headless chrome=84.0.4147.89)
I am sure that the problem is with the Django app container itself, not the Robot container, since, as said above, I have tested the same environment against a different web app installed outside Docker and it worked.
docker-compose.yml:
version: "3.6"
services:
redis:
image: redis:3.2
ports:
- 6379
networks:
local:
ipv4_address: 10.0.0.20
smtpd:
image: mysmtpd:1.0.5
ports:
- 25
networks:
- local
postgres:
image: mypostgres
build:
context: ../dias-postgres/
args:
VERSION: ${POSTGRES_TAG:-12}
hostname: "postgres"
environment:
POSTGRES_DB: ${POSTGRES_USER}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
networks:
local:
ipv4_address: 10.0.0.100
ports:
- 5432
volumes:
- my-postgres:/var/lib/postgresql/data
app:
image: mypyenv:${PYENV_TAG:-1.1}
tty: true
stdin_open: true
user: ${MY_USER:-jenkins}
networks:
local:
ipv4_address: 10.0.0.50
hostname: "app"
ports:
- 8000
volumes:
- ${WORKSPACE}:/app
environment:
ALLOW_HOST: "*"
PGHOST: postgres
PGUSER: ${POSTGRES_USER}
PGDATABASE: ${POSTGRES_USER}
PGPASSWORD: ${POSTGRES_PASSWORD}
ANONYMIZE: "false"
REDIS_HOST: redis
REDIS_DB: 2
APP_PATH: ${APP_PATH}
APP: ${MANDANT}
TIMER: ${TIMER:-20}
EMAIL_BACKEND: "dias.core.log.mail.SmtpEmailBackend"
EMAIL_HOST: "smtpd"
EMAIL_PORT: "25"
robot:
image: myrobot:${ROBOT_TAG:-1.0.9}
user: ${ROBOT_USER:-jenkins}
networks:
local:
ipv4_address: 10.0.0.70
volumes:
- ${WORKSPACE}:/app
- ${ROBOT_REPORTS_PATH}:/APP_Robot_Reports
environment:
APP_ROBOT: ${APP_ROBOT}
TIMER: ${TIMER:-20}
PGHOST: postgres
PGUSER: ${POSTGRES_USER}
PGDATABASE: ${POSTGRES_USER}
PGPASSWORD: ${POSTGRES_PASSWORD}
THREADS: ${THREADS:-4}
tty: true
stdin_open: true
entrypoint: start-robot
networks:
local:
driver: bridge
ipam:
config:
- subnet: 10.0.0.0/24
volumes:
my-postgres:
external: true
name: my-postgres
I have monitored the app stats and nothing is abnormal during testing. I also manually tested the app in a browser and it looks fine, with nothing wrong about it.
Note: there is no mismatch between the ChromeDriver and Google Chrome versions (this shouldn't matter anyway, since the same Robot container has worked with another instance where Docker is not used for the Django app).
Does anyone have an idea?
I hadn't noticed before that I run pabot with 8 processes while the Django app was started with only 2 Celery workers. As soon as I increased the Celery workers to 4, it worked. I'm not sure whether this is the actual cause, but it made sense to me, and it worked:
celery -A server -c ${CELERY_CONCURRENCY:-2} worker

How can I set path to load data from CSV file into PostgreSQL database in Docker container?

I would like to load data from CSV file into PostgreSQL database in Docker.
I run:
docker exec -ti my project_db_1 psql -U postgres
Then I select my database:
\c myDatabase
Now I try to load data from myfile.csv which is in the main directory of the Django project into backend_data table:
\copy backend_data (t, sth1, sth2) FROM 'myfile.csv' CSV HEADER;
However, I get an error:
myfile.csv: No such file or directory
It seems to me that I've tried every possible path and nothing works. Any ideas how I can solve it? This is my docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
  django:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
The easiest way is to mount a directory into the postgres container, place the file into the mounted directory, and reference it there.
We actually mount the pgdata directory, to be sure that the Postgres data survives even if we recreate the Postgres container. So my example will also use pgdata:
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    volumes:
      - "<path_to_local_pgdata>:/var/lib/postgresql/data/pgdata"
Place myfile.csv into <path_to_local_pgdata> (relative to the directory containing the compose config, or an absolute path). The copy command then looks like this:
\copy backend_data (t, sth1, sth2) FROM '/var/lib/postgresql/data/pgdata/myfile.csv' CSV HEADER;
You need to mount the path of myfile.csv in the db container if you are running the command in that container.
You might have mounted the file only in the django service.
A possible docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    volumes:
      - <path_to_csv_in_local>:<path_of_csv_in_db_container>
  django:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Also, you haven't created a mount for your db data. This will be fatal once you remove your database container (you will lose all your data). The postgresql container stores data in /var/lib/postgresql/data; you need to mount this path to your local system to keep the data even if the container goes away:
volumes:
  - <path_of_db_in_local_system>:/var/lib/postgresql/data
