Can't connect to my Docker Postgres from Python suddenly

I've been trying to configure my M1 to work with an older Ruby on Rails API, and I think in the process I've broken my ability to connect any of my Python APIs to their database images running locally in Docker.
When I run:
psql -U dev -h localhost database
Instead of the lovely psql blinking cursor that lets me run any SQL statement I'd like, I get this error message instead:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: database "dev" does not exist
I've tried docker-compose up and down with force recreate, and brew uninstalling and reinstalling Postgres. I've also downloaded the Postgres.app dmg and made sure to change it to a different port, hoping that would trigger whatever is needed for psycopg2 to connect to the Docker image.
The docker-compose.yaml looks like this:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      POSTGRES_USER: dev
      POSTGRES_HOST_AUTH_METHOD: trust
    networks:
      default:
        aliases:
          - postgres
    ports:
      - 5432:5432
What am I missing, and what can I blame Ruby on Rails for (which works, by the way)? 🤣

I think it's just the Docker configuration you need to update.
First of all, check whether the port is already used by any other service on your local machine (most likely a local Postgres server).
The next step is to change your YAML file as below:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=test_db
    ports:
      - 5434:5432
After that you can connect with the following command:
psql -U postgres -h localhost -p 5434
This assumes you have a separate YAML file for your Python application.
If you merge your Python service into the same YAML file, then the host in your connection string will be the service name (db in your case) and the port will be 5432.
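For illustration, a minimal psycopg2 sketch matching the compose file above (the credentials are the ones set under environment:):

import psycopg2

# From the host machine, use the published port (5434 above):
conn = psycopg2.connect(host="localhost", port=5434,
                        user="postgres", password="postgres",
                        dbname="test_db")

# From another container in the same compose file, use the service name
# and the container-internal port instead:
# conn = psycopg2.connect(host="db", port=5432, user="postgres",
#                         password="postgres", dbname="test_db")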

So the answer was pretty simple. I had a third instance of Postgres running on my computer that I had not accounted for: the brew version. Simply running brew services stop postgres and later brew uninstall postgres fixed all my problems. My Ruby on Rails API, which relies on the "native" Postgres on my Mac, now works (pro tip: I moved that one to port 5431), and my Python APIs, which use the containerized Postgres on port 5432, work without any headaches. During some initial confusion in my Ruby on Rails setup, which required getting Ruby 2.6.7 running on an M1 Mac, I must have installed Postgres via brew in an attempt to get something like db:create to work.
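If you suspect the same kind of port collision, brew services list or lsof -i :5432 will show what's running; a small Python check works too (a hedged sketch; the port list is just the ones mentioned above):

import socket

def port_in_use(port, host="127.0.0.1"):
    # A successful TCP connection means something is already listening.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        return sock.connect_ex((host, port)) == 0

for port in (5431, 5432, 5434):
    print(port, "in use" if port_in_use(port) else "free")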

Related

Docker Compose with Django and Postgres Fails with "django.db.utils.OperationalError: could not connect to server:" [duplicate]

This question already has an answer here: Docker-compose: App container can't connect to Postgres
I am trying to run my django/postgres application with docker compose. When I run docker compose up -d I get the following logs on my postgres container running on port 5432:
2023-02-18 00:10:25.049 UTC [1] LOG: starting PostgreSQL 13.8 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2023-02-18 00:10:25.049 UTC [1] LOG: listening on IPv6 address "::", port 5432
2023-02-18 00:10:25.052 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-02-18 00:10:25.054 UTC [22] LOG: database system was shut down at 2023-02-18 00:10:06 UTC
2023-02-18 00:10:25.056 UTC [1] LOG: database system is ready to accept connections
It appears my postgres container is working properly. However, my python/django container has the following logs:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - ./xi/.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=***
      - POSTGRES_USER=***
      - POSTGRES_PASSWORD=***
    volumes:
      - dev-db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  dev-db-data:
Dockerfile:
FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install nano python3-dev libpq-dev -y
COPY requirements-prod.txt /code/
RUN pip install -r requirements-prod.txt
COPY . /code/
I must be missing something small that allows the python container to communicate with the postgres container.
Also a few additional questions:
What does it mean that the container is "listening on IPv4 address '0.0.0.0', port 5432"? To my understanding, 0.0.0.0 encompasses all IP addresses, including 127.0.0.1. So in this case that shouldn't be an issue (correct me if I'm wrong).
I have been struggling with this for a few days. I have followed the getting-started docs as well as the Python usage guides in the Docker docs, and I feel I understand everything, but I am unable to debug my containers efficiently. What additional knowledge can I acquire to help me debug a container with the same level of comfort as I would a Python script?
I tried a few things:
swapping env_file with the credentials hard coded in
changing python with python3
removing sh -c
I tried building my database first with docker-compose up -d --build db and then building my web app with docker-compose up -d --build web, and the issue persisted.
I tried everything with the environment variables, and it appears improper credentials are not the issue. Running python manage.py runserver without Docker successfully connects to the database. There are some similar Stack Overflow questions, but I have tried their solutions and they do not work.
Part of my issue is that I don't know what to try and how to efficiently debug Docker containers yet (hence the question above).
What have you set as your HOST variable in DATABASES['default'] in settings.py? If it's '127.0.0.1', try changing it to 'db' to match the compose service name.
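For reference, a sketch of what that block in settings.py might look like; the environment variable names here are assumptions that should match whatever is in ./xi/.env:

import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB"),
        "USER": os.environ.get("POSTGRES_USER"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD"),
        "HOST": "db",  # the compose service name, not 127.0.0.1
        "PORT": "5432",
    }
}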

Exception has occurred: OperationalError could not translate host name "db" to address: Unknown host

I am running this script locally. On the Postgres connection, I am facing "Exception has occurred: OperationalError
could not translate host name "db" to address: Unknown host". The database is up; I started it with
docker run --publish 8000:8000 --detach --name db REMOTEURL:latest
When I do the same thing using docker-compose, exec into the Django app, and run it as a management command, it works. The code:
conn_string = "host='db' dbname='taras' user='postgres' password='postgres'"
with psycopg2.connect(conn_string) as connection:
    with connection.cursor() as cursor:
        ...  # my stuff
I don't know why the Docker name is never recognized when I access the Docker container from a local script on my computer. I tried "localhost" instead of "db" as well. When the Python script runs inside a Docker container, it has no problem resolving the other container (db in my case). What am I missing?
EDIT: Just to clarify, I am running only the database in Docker. The Python script is started locally from my Windows command line:
python myscript.py
Docker-compose file:
version: "2"
services:
db:
image: REMOTEURL:latest
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_HOST_AUTH_METHOD: trust
This probably happens because when you launch containers they can resolve each other's hostnames based on the names you give them internally, but if you want to access a container from the outside (I'm assuming you want to access your Postgres DB), you also need to publish the port by adding -p 5432:5432 to your postgres container. If you want to access Postgres from another container on the same network, connect with db:5432; from the outside, use localhost:5432.
Edit:
Try this as your docker-compose.yml
version: "2"
services:
db:
image: REMOTEURL:latest
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_HOST_AUTH_METHOD: trust
ports:
- 5432:5432
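With the port published, a script running on the host should connect via localhost rather than the container name; a minimal sketch reusing the credentials from the question:

import psycopg2

# "db" only resolves inside the compose network; from the host, use
# localhost and the published port instead.
conn_string = "host='localhost' port='5432' dbname='taras' user='postgres' password='postgres'"
with psycopg2.connect(conn_string) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT version();")
        print(cursor.fetchone())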

running python and mysql with docker error - cannot connect

I am using mysql-connector; whenever I run the container using Docker I get this error:
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'db:3306' (-5 No address associated with hostname)
but when I run the project using Python alone it executes with no errors.
I want to use phpMyAdmin only for the database. Please help.
To pull the MySQL image on your Linux machine:
docker pull mysql:latest
To run it and mount a persistent folder, with port access on 3306 (the standard MySQL port):
docker run --name=mysql_dockerdb --env="MYSQL_ROOT_PASSWORD=<your_password>" -p 3306:3306 -v /home/ubuntu/sql_db/<your_dbasename>:/var/lib/mysql -d mysql:latest
To connect to the running container so that you can create the database inside it:
docker exec -it mysql_dockerdb mysql -uroot -p<your_password>
My SQL code to establish the database:
CREATE DATABASE dockerdb;
CREATE USER 'newuser'@'%' IDENTIFIED BY 'newpassword';
GRANT ALL PRIVILEGES ON dockerdb.* TO 'newuser'@'%';
ALTER USER 'newuser'@'%' IDENTIFIED WITH mysql_native_password BY 'newpassword';
You will now have a container running with a persistent SQL database. You connect to it from your Python code; I am running Flask with MySQL. You will want to keep your passwords in environment variables. I am using a Mac, so my ~/.bash_profile contains:
export RDS_LOGIN="mysql+pymysql://<username>:<userpassword>@<dockerhost_ip>/dockerdb"
Within Python:
import os
SQLALCHEMY_DATABASE_URI = os.environ.get('RDS_LOGIN')
And at that point you should be able to connect in your usual Python manner. Note that I've glossed over any security aspects on the presumption this is local behind a firewall.
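For illustration, a minimal SQLAlchemy sketch that picks up that environment variable (assuming the pymysql driver named in the URI is installed):

import os
from sqlalchemy import create_engine, text

# RDS_LOGIN holds the URI exported in ~/.bash_profile above.
engine = create_engine(os.environ["RDS_LOGIN"])
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())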

Connect to MySQL in Docker container using Python

I am using pony.orm to connect to a MySQL database from Python code:
db.bind(provider='mysql', user=username, password=password, host='0.0.0.0', database=database)
And when I write the docker compose file:
db:
  image: mariadb
  ports:
    - "3308:3306"
  environment:
    MYSQL_DATABASE: db
    MYSQL_USER: root
    MYSQL_ROOT_PASSWORD: ''
How can I pass the hostname to the Python program by giving it a value (under environment:) in the docker-compose.yml file?
And if I pass the value there, can I access it through os.environ['PARAM'] in the Python code?
Because you've named your service db in the docker-compose.yaml, you can use that as the host, provided you are on the same network:
db.bind(provider='mysql', user=username, password=password, host='db', database=database)
To ensure you are on that network, in your docker-compose.yaml, at the bottom, you'll want:
networks:
  default:
    external:
      name: <your-network>
And you'll need to create that network before running docker-compose up
docker network create <your-network>
This avoids the need for an environment variable, as the container name will be added to the routing table of the network.
You don't need to define your own network, as docker-compose will handle that for you, but if you prefer to be a bit more explicit, it allows you the flexibility to do so. Normally, you would reserve this for multiple compose solutions that you wanted to join together on a single network, which is not the case here.
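As for the second part of the question: yes, anything you set under environment: for the service that runs your Python code shows up in os.environ. A minimal sketch, where DB_HOST is a hypothetical variable name you would set on your app service (e.g. DB_HOST: db), with fallbacks taken from the compose file above:

import os
from pony.orm import Database

db = Database()
db.bind(provider='mysql',
        user=os.environ.get('DB_USER', 'root'),
        password=os.environ.get('DB_PASSWORD', ''),
        # DB_HOST is hypothetical; it falls back to the service name.
        host=os.environ.get('DB_HOST', 'db'),
        database=os.environ.get('DB_NAME', 'db'))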
It's handled in docker-compose the same way you would do it in vanilla docker:
docker run -d -p 3308:3306 --network <your-network> --name db mariadb
docker run -it --network <your-network> ubuntu bash
# in the shell of the ubuntu container
apt-get update && apt-get install iputils-ping -y
ping -c 5 db
# here you will see the results of ping reaching container db
5 packets transmitted, 5 received, 0% packet loss, time 4093ms
Edit
As a note, per @DavidMaze's comment, the port you will be communicating with is 3306, since that's the port the container is listening on, not 3308.

Docker cannot connect application to MySQL

I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on MySQL running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on, or network aliases they can reach each other by host name. In your case the host name is the service name, but this can be overridden (see: docs).
Your script in the my_common_package container/service should then connect to mysql on port 3306 according to your setup (not localhost on port 3306).
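In code, that means something like this minimal pymysql sketch (credentials taken from the compose file above):

import pymysql

# Inside the compose network the host is the service name "mysql",
# and the port is the container port 3306:
connection = pymysql.connect(host="mysql", port=3306,
                             user="my_user", password="my_password",
                             database="My_Database")
with connection.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchone())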
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already does this.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to port 3306 on localhost.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
.. then use docker network inspect <id> to look at the details.
Assigned IP addresses to containers can be pretty random, so the only viable way for container to container communication is using hostnames.
