Connect to MySQL in Docker container using Python

I am using pony.orm to connect to a MySQL database from Python:
db.bind(provider='mysql', user=username, password=password, host='0.0.0.0', database=database)
And here is my docker-compose file:
db:
  image: mariadb
  ports:
    - "3308:3306"
  environment:
    MYSQL_DATABASE: db
    MYSQL_USER: root
    MYSQL_ROOT_PASSWORD: ''
How can I pass the hostname to the Python program by giving a value (under environment:) in the docker-compose.yml file?
If I pass the value there can I access the value through os.environ['PARAM'] in the Python code?

Because you've named your service db in the docker-compose.yaml, you can use that as the host, provided you are on the same network:
db.bind(provider='mysql', user=username, password=password, host='db', database=database)
To ensure you are on that network, in your docker-compose.yaml, at the bottom, you'll want:
networks:
  default:
    external:
      name: <your-network>
And you'll need to create that network before running docker-compose up:
docker network create <your-network>
This avoids the need for an environment variable, as the container name will be added to the routing table of the network.
You don't need to define your own network, as docker-compose will handle that for you, but if you prefer to be a bit more explicit, it allows you the flexibility to do so. Normally, you would reserve this for multiple compose solutions that you wanted to join together on a single network, which is not the case here.
It's handled in docker-compose the same way you would do it in vanilla docker:
docker run -d -p 3308:3306 --network <your-network> --name db mariadb
docker run -it --network <your-network> ubuntu bash
# in the shell of the ubuntu container
apt-get update && apt-get install iputils-ping -y
ping -c 5 db
# here you will see the results of ping reaching container db
5 packets transmitted, 5 received, 0% packet loss, time 4093ms
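To answer the second part of the question: yes, a value set under environment: in the compose file can be read with os.environ. A minimal sketch, assuming a variable named DB_HOST that you add to your Python service's environment yourself:

import os

# fall back to the compose service name if the variable isn't set
db_host = os.environ.get('DB_HOST', 'db')
db.bind(provider='mysql', user=username, password=password,
        host=db_host, database=database)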
Edit
As a note, per @DavidMaze's comment, the port you will be communicating with is 3306, since that's the port the container is listening on, not 3308.

Related

Can't connect to my Docker Postgres from python suddenly

I've been trying to configure my M1 Mac to work with an older Ruby on Rails API, and I think in the process I've broken my ability to connect any of my Python APIs to their database images running locally in Docker.
When I run:
psql -U dev -h localhost database
Instead of the lovely psql blinking cursor allowing me to run any SQL statement I'd like, I get this error message:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: database "dev" does not exist
I've tried docker-compose up and down and force-recreating, and brew uninstalling and reinstalling Postgres. I've downloaded the Postgres.app dmg and made sure to change it to a different port, hoping that would trigger whatever psycopg2 needs to connect to the Docker image.
The docker-compose.yaml looks like this:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      POSTGRES_USER: dev
      POSTGRES_HOST_AUTH_METHOD: trust
    networks:
      default:
        aliases:
          - postgres
    ports:
      - 5432:5432
What am I missing, and what can I blame Ruby on Rails for (which works, by the way)? 🤣
I think it's just the Docker configuration you need to update.
First of all, check the existing services on your local machine to see whether the port is already used by another service (most likely a local Postgres server).
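For example, you can check what is already bound to the port with (assuming lsof is available, as it is on macOS):
lsof -i :5432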
The next step is to change your YAML file as below:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=test_db
    ports:
      - 5434:5432
After that you can connect with the following command in your terminal:
psql -U postgres -h localhost -p 5434
This assumes that you have a separate YAML file for your Python application.
If you merge your Python code into the same YAML file, then your connection host will be your service name (db in your case) and the port will be 5432.
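For the Python side, a minimal psycopg2 sketch against the remapped setup above (credentials taken from the compose file; adjust to yours):

import psycopg2

# from the host machine, use localhost and the published port 5434
conn = psycopg2.connect(dbname="test_db", user="postgres",
                        password="postgres", host="localhost", port=5434)

# from a container on the same compose network, it would instead be
# host="db", port=5432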
So the answer is pretty simple. What was happening was that I had a third instance of Postgres running on my computer that I had not accounted for, which was the brew version. Simply running brew services stop postgres and later brew uninstall postgres fixed all my problems: my Ruby on Rails API works again against the "native" Postgres on my Mac (pro tip: I changed this one to use port 5431), and my Python API works against the containerized Postgres on port 5432 without any headaches. During some initial confusion in my Ruby on Rails setup, which required getting Ruby 2.6.7 running on an M1 Mac, I must have installed Postgres via brew in an attempt to get something like db:create to work.

Persisting mysql database with docker

I am trying to containerise a Python script and MySQL database using Docker. The python script interacts with a program running on the host machine using a TCP connection, so I've set up a "host" network for the Docker containers to allow this. The python script is currently speaking to the program on the host machine fine (TCP comms are as expected). The python script is also communicating with the MySQL database running in the other container fine (no errors from pymysql). When I use the Docker Desktop CLI interface I can see the timestamps on the files in /var/lib/mysql/donuts/*.ibd on the database container updating as the python code pushes info into the tables.
However, my problem is that when I bring both containers down using docker compose down and then bring them up again using docker compose up the information in the database is not persisting. Actually, if I enter the database container using the CLI using mysql -u donuts and then try to manually inspect the tables while the containers are running, both tables are completely empty. I've been going in circles trying to find out why I cannot see the data in the tables even though I see the files in /var/lib/mysql/donuts/*.ibd updating at the same instance the Python container is inserting rows. The data is being stored somewhere while the containers are running, at least temporarily, as the python container is reading from one of the tables and using that information while the containers are alive.
Below are my Dockerfile and docker-compose.yml files and the entire project can be found here. The python code that interacts with the database is here, but I think the issue must be with the Docker setup, rather than the Python code.
Any advice on making the database persistent would be much appreciated, thanks.
version: '3.1'

services:
  db:
    image: mysql:8.0.25
    container_name: db
    restart: always
    secrets:
      - mysql_root
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root
      MYSQL_DATABASE: donuts
    volumes:
      - mysql-data:/var/lib/mysql
      - ./mysql-init.sql:/docker-entrypoint-initdb.d/mysql-init.sql
    network_mode: "host"

  voyager_donuts:
    container_name: voyager_donuts
    build:
      context: .
      dockerfile: Dockerfile
    image: voyager_donuts
    network_mode: "host"
    volumes:
      - c:/Users/user/Documents/Voyager/DonutsCalibration:/voyager_calibration
      - c:/Users/user/Documents/Voyager/DonutsLog:/voyager_log
      - c:/Users/user/Documents/Voyager/DonutsData:/voyager_data
      - c:/Users/user/Documents/Voyager/DonutsReference:/voyager_reference

volumes:
  mysql-data:

secrets:
  mysql_root:
    file: ./secrets/mysql_root
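As a side note, the named volume mysql-data declared above is what should persist /var/lib/mysql across docker compose down and up; you can confirm it still exists afterwards with docker volume ls (it will be prefixed with the compose project name). And here is the Dockerfile: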
# get a basic python image
FROM python:3.9-slim-buster
# set up Tini to handle zombie processes etc
ENV TINI_VERSION="v0.19.0"
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# keep setup tools up to date
RUN pip install -U \
    pip \
    setuptools \
    wheel
# set a working directory
WORKDIR /donuts
# make a new user
RUN useradd -m -r donuts && \
    chown donuts /donuts
# install requirements first to help with caching
COPY requirements.txt ./
RUN pip install -r requirements.txt
# copy from current dir to workdir
COPY . .
# stop things running as root
USER donuts
# add entry points
ENTRYPOINT ["/tini", "--"]
# start the code once the container is running
CMD python voyager_donuts.py
And of course as soon as I post this I figure out the answer. My database connection context manager was missing the commit() line. Le sigh, I've spent much longer than I care to admit on figuring this out...
@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host,
                         port=port,
                         user=user,
                         password=password,
                         db=db) as conn:
        with conn.cursor() as cur:
            yield cur
should have been:
@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host,
                         port=port,
                         user=user,
                         password=password,
                         db=db) as conn:
        with conn.cursor() as cur:
            yield cur
        conn.commit()
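For illustration, usage then looks like this (the readings table and its column are hypothetical):

with db_cursor(host='127.0.0.1', user='donuts', db='donuts') as cur:
    cur.execute("INSERT INTO readings (value) VALUES (%s)", (42,))
# commit() now runs before the connection is closed, so the row
# actually reaches disk and survives docker compose down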

Docker cannot connect application to MySQL

I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on a MySQL server running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"

  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql

volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on, or network aliases they can reach each other by hostname. In your case the hostname is the service name, but this can be overridden (see the docs).
Your script in the my_common_package container/service should then connect to mysql on port 3306 according to your setup (not localhost on port 3306).
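In testsql.py that would look something like this (a sketch, assuming pymysql and the credentials from your compose file):

import pymysql

# use the compose service name as the hostname; inside the compose
# network, "localhost" refers to the app container itself
conn = pymysql.connect(host="mysql", port=3306,
                       user="my_user", password="my_password",
                       database="My_Database")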
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already does this.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to localhost on port 3306.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639   myproject_default   bridge   local
82498fd930bb   bridge              bridge   local
.. then use docker network inspect <id> to look at the details.
The IP addresses assigned to containers can be pretty random, so the only viable way for container-to-container communication is using hostnames.

Docker compose mysql connection failing

I am trying to run two Docker containers using docker-compose and connect the mysql container to the app container. The mysql container runs fine, but the app container fails to start with the error: Error: 2003: Can't connect to MySQL server on '127.0.0.1:3306' (111 Connection refused)
It seems like my app container is trying to connect to the MySQL instance on my host instead of the mysql container.
docker-compose.yml
version: '2'
services:
  mysql:
    image: mysql:5.7
    container_name: database
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: malicious
      MYSQL_USER: root
      MYSQL_PASSWORD: root

  app:
    build: .
    restart: unless-stopped
    volumes:
      - .:/Docker_compose_app  # app directory
    depends_on:
      - "mysql"
    command: ["python", "database_update.py"]
    restart: unless-restart
    environment:
      # Environment variables to configure the app on startup.
      MYSQL_DATABASE: malicious
      MYSQL_HOST: database
Dockerfile
FROM python:2.7
ADD . /Docker_compose_app
WORKDIR /Docker_compose_app
RUN apt-get update
RUN pip install --requirement requirement.txt
This is my database_update.py file.
def create_TB(cursor, connection):
    query = '''CREATE TABLE {}(malicious VARCHAR(100) NOT NULL)'''.format("url_lookup")
    cursor.execute(query)
    connection.commit()

def connection():
    try:
        cnx = mysql.connector.connect(user="root", password='root', database=malicious)
        cursor = cnx.cursor()
        create_TB(cursor, cnx)
    except mysql.connector.errors.Error as err:
        data = {"There is an issue in connection to DB": "Error: {}".format(err)}
There are two issues I can see:
Try to add
links:
  - mysql:mysql
to the app service in your Docker Compose file. This will make sure that you can reach the mysql container from app. It will set up a hostname mapping (DNS) in your app container, so when you ping mysql from app, it will resolve it to the mysql container's IP address.
In your .py file, where are you defining which host to connect to? Add host="mysql" to the connect call. By default, it will connect to 127.0.0.1, which is what you're seeing.
cnx = mysql.connector.connect(host="mysql", user="root", password="root", database="malicious")
Fixing both of these should solve your problem.
You might want to consider using Docker Networks.
I was having a similar problem with two separate Python containers connecting to one MySQL container, while those two were also connected to a Vue frontend.
First I tried using links (which was not optimal, because the communication flow is not entirely linear), just like you, but then I ran across this great post:
https://www.cbtnuggets.com/blog/devops/how-to-share-a-mysql-db-with-multiple-docker-containers
Using networks takes the port mapping out of the equation and lets you improve your overall app architecture.
Therefore I think you should try something like:
services:
  python-app:
    networks:
      - network_name
    ...

  mysql:
    networks:
      - network_name
    ...

networks:
  network_name:
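With both services attached to that network, the Python side stays the same as in the first answer: connect using the mysql service name as the host (a sketch, assuming mysql-connector-python):

import mysql.connector

# the service name resolves via Docker's embedded DNS on the shared network
cnx = mysql.connector.connect(host="mysql", user="root",
                              password="root", database="malicious")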

Connecting from psycopg2 on local machine to PostgreSQL db on Docker

I have used the following commands to create a Docker image with Postgres running on it:
docker pull postgres
docker run --name test-db -e POSTGRES_PASSWORD=my_secret_password -d postgres
I then created a table called test and inserted some random data into a couple of rows.
I am now trying to make a connection to this database table through psycopg2 in Python on my local machine.
I used the command docker-machine ip default to find out the IP address of the machine as 192.168.99.100 and am using the following to try and connect:
conn = psycopg2.connect("dbname='test-db' user='postgres' host='192.168.99.100' password='my_secret_password' port='5432'")
This is not working with the error message of "OperationalError: could not connect to server: Connection refused (0x0000274D/10061)"
Everything seems to be in order so I can't think why this would be refused.
According to the documentation for this postgres image, (at https://hub.docker.com/_/postgres/) this image includes EXPOSE 5432 (the postgres port) and the default username is postgres.
I also tried to get the IP address of the container itself with docker inspect test-db | grep IPAddress | awk '{print $2}' | tr -d '",' (found on SO in a slightly related article), but that IP address didn't work either.
The EXPOSE instruction may not be doing what you expect. It is used for links and inter-container communication inside the Docker network. When connecting to a container from outside the Docker bridge network, you need to publish the port with -p. Try adding -p 5432:5432 to your docker run command so that it looks like:
docker run --name test-db -e POSTGRES_PASSWORD=my_secret_password -d -p 5432:5432 postgres
Here is a decent explanation of the differences between publish and exposed ports: https://stackoverflow.com/a/22150099/684908. Hope this helps!
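After recreating the container with the published port, the original connection call should work as written (a sketch; note that unless you created a database named test-db, the default database in this image is postgres):

import psycopg2

# with docker-machine the host is the machine IP; with Docker Desktop
# it would simply be localhost
conn = psycopg2.connect("dbname='postgres' user='postgres' host='192.168.99.100' password='my_secret_password' port='5432'")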
