I am trying to containerise a Python script and a MySQL database using Docker. The Python script interacts with a program running on the host machine over a TCP connection, so I've set up a "host" network for the Docker containers to allow this. The Python script currently talks to the program on the host machine fine (TCP comms are as expected), and it is also communicating with the MySQL database running in the other container fine (no errors from pymysql). When I use the Docker Desktop CLI I can see the timestamps on the files in /var/lib/mysql/donuts/*.ibd in the database container updating as the Python code pushes info into the tables.
However, my problem is that when I bring both containers down with docker compose down and then bring them up again with docker compose up, the information in the database does not persist. In fact, if I enter the database container via the CLI, connect with mysql -u donuts, and manually inspect the tables while the containers are running, both tables are completely empty. I've been going in circles trying to find out why I cannot see the data in the tables even though I see the files in /var/lib/mysql/donuts/*.ibd updating at the same instant the Python container is inserting rows. The data is being stored somewhere while the containers are running, at least temporarily, as the Python container reads from one of the tables and uses that information while the containers are alive.
Below are my Dockerfile and docker-compose.yml files; the entire project can be found here. The Python code that interacts with the database is here, but I think the issue must be with the Docker setup rather than the Python code.
Any advice on making the database persistent would be much appreciated, thanks.
version: '3.1'

services:
  db:
    image: mysql:8.0.25
    container_name: db
    restart: always
    secrets:
      - mysql_root
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root
      MYSQL_DATABASE: donuts
    volumes:
      - mysql-data:/var/lib/mysql
      - ./mysql-init.sql:/docker-entrypoint-initdb.d/mysql-init.sql
    network_mode: "host"

  voyager_donuts:
    container_name: voyager_donuts
    build:
      context: .
      dockerfile: Dockerfile
    image: voyager_donuts
    network_mode: "host"
    volumes:
      - c:/Users/user/Documents/Voyager/DonutsCalibration:/voyager_calibration
      - c:/Users/user/Documents/Voyager/DonutsLog:/voyager_log
      - c:/Users/user/Documents/Voyager/DonutsData:/voyager_data
      - c:/Users/user/Documents/Voyager/DonutsReference:/voyager_reference

volumes:
  mysql-data:

secrets:
  mysql_root:
    file: ./secrets/mysql_root
# get a basic python image
FROM python:3.9-slim-buster
# set up Tini to handle zombie processes etc
ENV TINI_VERSION="v0.19.0"
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
# keep setup tools up to date
RUN pip install -U \
    pip \
    setuptools \
    wheel
# set a working directory
WORKDIR /donuts
# make a new user
RUN useradd -m -r donuts && \
    chown donuts /donuts
# install requirements first to help with caching
COPY requirements.txt ./
RUN pip install -r requirements.txt
# copy from current dir to workdir
COPY . .
# stop things running as root
USER donuts
# add entry points
ENTRYPOINT ["/tini", "--"]
# start the code once the container is running (exec form, so tini
# supervises python directly rather than a wrapping shell)
CMD ["python", "voyager_donuts.py"]
And of course as soon as I post this I figure out the answer. My database connection context manager was missing the commit() line. Le sigh, I've spent much longer than I care to admit on figuring this out...
from contextlib import contextmanager

import pymysql

@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host,
                         port=port,
                         user=user,
                         password=password,
                         db=db) as conn:
        with conn.cursor() as cur:
            yield cur
should have been:
@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host,
                         port=port,
                         user=user,
                         password=password,
                         db=db) as conn:
        with conn.cursor() as cur:
            yield cur
        conn.commit()
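For completeness, here's a minimal usage sketch of the fixed context manager (the table and column names are made up for illustration):

with db_cursor() as cur:
    # this INSERT is only durable because db_cursor now commits
    # after the cursor block exits
    cur.execute(
        "INSERT INTO some_table (col_a, col_b) VALUES (%s, %s)",
        (1.23, 4.56),
    )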
Related
I've been trying to configure my M1 to work with an older Ruby on Rails API, and I think in the process I've broken my ability to connect any of my Python APIs to their database images in Docker running locally.
When I run:
psql -U dev -h localhost database
Instead of the lovely psql blinking cursor allowing me to run any SQL statement I'd like, I get this error message instead:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: database "dev" does not exist
I've tried docker-compose up and down and force-recreating, and I've uninstalled and reinstalled Postgres via brew. I've downloaded the Postgres.app dmg and made sure to change it to a different port, hoping that would trigger whatever is needed for psycopg2 to connect to the Docker image.
The docker-compose.yaml looks like this:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      POSTGRES_USER: dev
      POSTGRES_HOST_AUTH_METHOD: trust
    networks:
      default:
        aliases:
          - postgres
    ports:
      - 5432:5432
What am I missing and what can I blame ruby on rails for (which works by the way) 🤣
I think it's just the Docker configuration you need to update.
First of all, check whether the port is already in use by another service on your local machine (most likely a local Postgres server).
The next step is to change your yaml file as below:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=test_db
    ports:
      - 5434:5432
After that you can connect with the following command in your cmd:
psql -U postgres -h localhost -p 5434
This assumes you have a separate yaml file for your Python application. If you merge your Python code into the same yaml file, then your connection host will be the service name (db in your case) and the port will be 5432.
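If it helps, here is a sketch of the two connection paths from Python with psycopg2 (credentials taken from the yaml above):

import psycopg2

# From the host machine: the container's 5432 is published as 5434.
conn = psycopg2.connect(host="localhost", port=5434, user="postgres",
                        password="postgres", dbname="test_db")

# From another container on the same compose network: use the service
# name as the host and the container-side port 5432.
conn = psycopg2.connect(host="db", port=5432, user="postgres",
                        password="postgres", dbname="test_db")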
So the answer is pretty simple. What was happening was that I had a third instance of Postgres running on my computer that I had not accounted for: the brew version. Simply running brew services stop postgres, and later brew uninstall postgres, fixed all my problems. My Ruby on Rails API now works against the "native" Postgres on my Mac (protip: I changed this one to use port 5431), and my Python APIs work against the containerized Postgres on port 5432, without any headaches. During some initial confusion in my Ruby on Rails setup, which required getting Ruby 2.6.7 running on an M1 Mac, I must have installed Postgres via brew in an attempt to get something like db:create to work.
Absolutely new to Docker and Postgres (I know they're not tightly related, but please read on).
I have a simple Python script (not a Django project, not a Kivy project, just a .py file). It fetches something and writes it into the Postgres db (using psycopg2). On my Windows 10 machine, after a million trials and errors to get this working, it works: when I docker-compose up the whole project, it does what it's supposed to do and writes into the Postgres db. After that, when I docker push the resulting image to Docker Hub and then docker pull it onto a totally unrelated Linux Azure VM, it fails with the following error:
Traceback (most recent call last):
File "/app/file00.py", line 19, in <module>
conn = psycopg2.connect(
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "zedb" to address: Name or service not known
zedb is the name of the Postgres database service in the Docker-compose file (I've pasted it below).
I know I've not done something right, but I am not sure what it is.
Dockerfile for the script (it's pretty much the default template that VSCode gives you):
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:latest
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "file00.py"]
The db part does not have a Dockerfile, but an init.sql file that creates the table the script writes into; it is mounted into the Postgres image from the docker-compose file. From what I understand, if the container fails or shuts down somehow, the data in the tables is retained (volume persistence), and when the container is spun up again, the table is created if it doesn't exist. Here's what's in the init.sql file:
CREATE TABLE IF NOT EXISTS pt (
    serial_num SERIAL,
    col1 VARCHAR (40) NOT NULL PRIMARY KEY,
    col2 VARCHAR (150) NOT NULL
);
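For what it's worth, this is the kind of check I run to see whether the table survived a down/up cycle (a sketch; the credentials are the ones in the compose file below):

import psycopg2

# connect from the host through the published port and count rows in pt
conn = psycopg2.connect(host="localhost", port=5432, user="user",
                        password="user123!", dbname="fkpl")
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM pt")
    print(cur.fetchone()[0])
conn.close()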
I could be wrong on so many levels about all this, but there's no one to check with, and I am learning this all by myself.
Finally, here's the docker-compose file.
version: '3'

services:
  zedb:
    image: 'postgres'
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=user123!
      - POSTGRES_DB=fkpl
      - PGDATA=/var/lib/postgresql/data/db-files/
    expose:
      - 5432
    ports:
      - 5432:5432
    volumes:
      - ./db/:/var/lib/postgresql/data/
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql

  zescript:
    build: ./app
    volumes:
      - ./app:/usr/scr/app
    depends_on:
      - zedb
Any help is greatly appreciated.
I am using pony.orm to connect to a MySQL db from Python code:
db.bind(provider='mysql', user=username, password=password, host='0.0.0.0', database=database)
And this is the relevant service in my docker-compose file:
db:
  image: mariadb
  ports:
    - "3308:3306"
  environment:
    MYSQL_DATABASE: db
    MYSQL_USER: root
    MYSQL_ROOT_PASSWORD: ''
How can I pass the hostname to the Python program via a value under environment: in the docker-compose.yml file?
If I pass the value there, can I access it through os.environ['PARAM'] in the Python code?
Because you've named your service db in the docker-compose.yaml, you can use that as the host, provided you are on the same network:
db.bind(provider='mysql', user=username, password=password, host='db', database=database)
To ensure you are on that network, in your docker-compose.yaml, at the bottom, you'll want:
networks:
  default:
    external:
      name: <your-network>
And you'll need to create that network before running docker-compose up
docker network create <your-network>
This avoids the need for an environment variable, as the container name will be added to the routing table of the network.
You don't need to define your own network, as docker-compose will handle that for you, but if you prefer to be a bit more explicit, it allows you the flexibility to do so. Normally, you would reserve this for multiple compose solutions that you wanted to join together on a single network, which is not the case here.
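That said, to answer the os.environ part of the question directly: if you do want to pass the host through the environment, a minimal sketch would be (DB_HOST is a name I made up; you would set it under the service's environment: block):

import os

# fall back to the service name if DB_HOST is not set
db_host = os.environ.get('DB_HOST', 'db')
db.bind(provider='mysql', user=username, password=password,
        host=db_host, database=database)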
It's handled in docker-compose the same way you would do it in vanilla docker:
docker run -d -p 3308:3306 --network <your-network> --name db mariadb
docker run -it --network <your-network> ubuntu bash
# in the shell of the ubuntu container
apt-get update && apt-get install iputils-ping -y
ping -c 5 db
# here you will see the results of ping reaching container db
5 packets transmitted, 5 received, 0% packet loss, time 4093ms
Edit
As a note, per @DavidMaze's comment, the port you will be communicating with is 3306, since that's the port that the container is listening on, not 3308.
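In other words, reusing the question's bind call, from inside the network it would look something like this (a sketch):

# 3306 is the container-side port; 3308 only matters when you connect
# from the host through the published mapping
db.bind(provider='mysql', user=username, password=password,
        host='db', port=3306, database=database)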
I have two containers "web" and "db". I have an existing data file in csv format.
The problem is that I can initialize the MySQL database with a schema using docker-compose (or just run it with parameters), but how can I import the existing data? I have a Python script to parse and filter the data and then insert it into the db, but I cannot run it in the "db" container because that container only runs the MySQL image.
Update 1
version: '3'

services:
  web:
    container_name: web
    build: .
    restart: always
    links:
      - db
    ports:
      - "5000:5000"

  db:
    image: mysql
    container_name: db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: "test"
      MYSQL_USER: "test"
      MYSQL_PASSWORD: "test"
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "33061:3306"
There is a Python script that reads data from a csv file and inserts it into the database, which works fine. Now I want to run that script once the MySQL container is set up. (I already have Python connecting to MySQL in a container.)
Otherwise, does anyone have a better solution for importing the existing data?
The MySQL docker image can execute shell scripts or sql files mounted under /docker-entrypoint-initdb.d in a running container, as described here and here. So I suggest you write an SQL file that reads the CSV file (which you should also mount into the container so the sql file can reach it) in order to restore it into MySQL, perhaps similar to this answer, or write a bash script to import the csv into mysql, whichever works for you.
You can check "Initializing a fresh instance" on the official Docker Hub page for mysql.
From the Dockerfile you can call a script (an entrypoint), and in that script you can call your Python script. For example:
Dockerfile:
FROM php:7.2-apache
RUN apt-get update
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
This will run your entrypoint script in the app container. Make sure you have a depends_on attribute in your app container's compose description.
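A rough sketch of what that Python import script could look like, assuming pymysql and the credentials from the compose file above (the csv file name, table, and column names are placeholders):

import csv
import time

import pymysql

# retry until the db service accepts connections
for _ in range(30):
    try:
        conn = pymysql.connect(host='db', user='test',
                               password='test', database='test')
        break
    except pymysql.err.OperationalError:
        time.sleep(2)
else:
    raise RuntimeError('MySQL never became reachable')

with conn:
    with conn.cursor() as cur:
        with open('data.csv', newline='') as f:
            for row in csv.reader(f):
                cur.execute(
                    'INSERT INTO mytable (col1, col2) VALUES (%s, %s)',
                    (row[0], row[1]),
                )
    conn.commit()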
I am trying to run 2 docker containers using docker-compose and connect the mysql container to the app container. The mysql container is running, but the app container fails to start with the error: Error: 2003: Can't connect to MySQL server on '127.0.0.1:3306' (111 Connection refused)
It seems like my app container is trying to connect to my host's mysql instead of the mysql container.
docker-compose.yml
version: '2'

services:
  mysql:
    image: mysql:5.7
    container_name: database
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: malicious
      MYSQL_USER: root
      MYSQL_PASSWORD: root

  app:
    build: .
    restart: unless-stopped
    volumes:
      - .:/Docker_compose_app  # app directory
    depends_on:
      - "mysql"
    command: ["python", "database_update.py"]
    environment:
      # Environment variables to configure the app on startup.
      MYSQL_DATABASE: malicious
      MYSQL_HOST: database
Dockerfile
FROM python:2.7
ADD . /Docker_compose_app
WORKDIR /Docker_compose_app
RUN apt-get update
RUN pip install --requirement requirement.txt
This is my database_update.py file.
import mysql.connector

def create_TB(cursor, connection):
    query = '''CREATE TABLE {}(malicious VARCHAR(100) NOT NULL)'''.format("url_lookup")
    cursor.execute(query)
    connection.commit()

def connection():
    try:
        cnx = mysql.connector.connect(user="root", password='root', database="malicious")
        cursor = cnx.cursor()
        create_TB(cursor, cnx)
    except mysql.connector.errors.Error as err:
        data = {"There is an issue in connection to DB": "Error: {}".format(err)}
There are two issues I can see:
Try to add
links:
  - mysql:mysql
to the app service in your Docker Compose file. This will make sure that you can reach the mysql container from app. It will set up a hostname mapping (DNS) in your app container, so when you ping mysql from app, it will resolve it to the mysql container's IP address.
In your .py file, where are you defining which host to connect to? Add host="mysql" to the connect call. By default, it will connect to 127.0.0.1, which is what you're seeing.
cnx = mysql.connector.connect(host="mysql", user="root", password = 'root', database=malicious)
Fixing both of these should solve your problem.
You might want to consider using Docker Networks.
I was having a similar problem with two separate Python containers connecting to one mysql container, while those two were connected to a Vue frontend.
First I tried using links, just like you (which was not optimal, because the communication flow is not entirely linear), but then I ran across this great post:
https://www.cbtnuggets.com/blog/devops/how-to-share-a-mysql-db-with-multiple-docker-containers
Using networks lets you drop the port mappings and improves your overall app architecture.
Therefore I think you should try something like:
services:
  python-app:
    networks:
      - network_name
    ...

  mysql:
    networks:
      - network_name
    ...

networks:
  network_name:
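With both services attached to network_name, the Python container can then reach MySQL by its service name; a minimal sketch (credentials are placeholders):

import pymysql

# 'mysql' is the compose service name above; user/password/database
# are placeholders for whatever your environment: block defines
conn = pymysql.connect(host='mysql', port=3306, user='user',
                       password='secret', database='mydb')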