Docker Compose Multiple Containers - Python

I have a Python script which connects to MySQL and inserts data into a database. That is the first container I want to build. I want to build a second container with a Python script that connects to the database in the first container and executes some queries. I am trying to follow the Docker documentation, but I find it difficult to write a proper yml file. Any guidance would be very helpful.

It depends on how complex your setup is, but the docker-compose.yml file should look similar to this:
version: '3'
services:
  my_database:
    image: mysql
    [... MySQL configs]
  my_python_container:
    build: .
    depends_on:
      - my_database
    links:
      - my_database
I don't know the configuration of your MySQL database, so I left that part blank ([... MySQL configs]).
The my_python_container service is built from a Dockerfile in the same folder, similar to:
FROM python
COPY script.py script.py
CMD python script.py
This should be enough to get the connection working, but keep in mind that in your program the MySQL hostname will be the service name given in the Compose file (my_database here), not localhost.
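For illustration, the connection from the second container might look like this (a minimal sketch, assuming pymysql and placeholder credentials):

import pymysql

# "my_database" is the Compose service name, which Docker's internal DNS
# resolves to the MySQL container; user/password/db are placeholders
conn = pymysql.connect(host='my_database',
                       user='root',
                       password='secret',
                       db='mydb')
with conn.cursor() as cur:
    cur.execute('SELECT 1')
conn.close()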

Related

Persisting mysql database with docker

I am trying to containerise a Python script and a MySQL database using Docker. The Python script interacts with a program running on the host machine over a TCP connection, so I've set up a "host" network for the Docker containers to allow this. The Python script is currently talking to the program on the host machine fine (TCP comms are as expected). The Python script is also communicating with the MySQL database running in the other container fine (no errors from pymysql). When I use the Docker Desktop CLI interface I can see the timestamps on the files in /var/lib/mysql/donuts/*.ibd in the database container updating as the Python code pushes info into the tables.
However, my problem is that when I bring both containers down using docker compose down and then bring them up again using docker compose up, the information in the database does not persist. In fact, if I enter the database container via the CLI and run mysql -u donuts to manually inspect the tables while the containers are running, both tables are completely empty. I've been going in circles trying to find out why I cannot see the data in the tables, even though I see the files in /var/lib/mysql/donuts/*.ibd updating at the same instant the Python container is inserting rows. The data is being stored somewhere while the containers are running, at least temporarily, as the Python container reads from one of the tables and uses that information while the containers are alive.
Below are my Dockerfile and docker-compose.yml files; the entire project can be found here. The Python code that interacts with the database is here, but I think the issue must be with the Docker setup rather than the Python code.
Any advice on making the database persistent would be much appreciated, thanks.
version: '3.1'
services:
  db:
    image: mysql:8.0.25
    container_name: db
    restart: always
    secrets:
      - mysql_root
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root
      MYSQL_DATABASE: donuts
    volumes:
      - mysql-data:/var/lib/mysql
      - ./mysql-init.sql:/docker-entrypoint-initdb.d/mysql-init.sql
    network_mode: "host"
  voyager_donuts:
    container_name: voyager_donuts
    build:
      context: .
      dockerfile: Dockerfile
    image: voyager_donuts
    network_mode: "host"
    volumes:
      - c:/Users/user/Documents/Voyager/DonutsCalibration:/voyager_calibration
      - c:/Users/user/Documents/Voyager/DonutsLog:/voyager_log
      - c:/Users/user/Documents/Voyager/DonutsData:/voyager_data
      - c:/Users/user/Documents/Voyager/DonutsReference:/voyager_reference
volumes:
  mysql-data:
secrets:
  mysql_root:
    file: ./secrets/mysql_root
# get a basic python image
FROM python:3.9-slim-buster

# set up Tini to handle zombie processes etc.
ENV TINI_VERSION="v0.19.0"
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini

# keep setup tools up to date
RUN pip install -U \
    pip \
    setuptools \
    wheel

# set a working directory
WORKDIR /donuts

# make a new user
RUN useradd -m -r donuts && \
    chown donuts /donuts

# install requirements first to help with caching
COPY requirements.txt ./
RUN pip install -r requirements.txt

# copy from current dir to workdir
COPY . .

# stop things running as root
USER donuts

# add entry points
ENTRYPOINT ["/tini", "--"]

# start the code once the container is running
CMD python voyager_donuts.py
And of course, as soon as I post this, I figure out the answer. My database connection context manager was missing the commit() line. Le sigh, I've spent much longer than I care to admit figuring this out...
from contextlib import contextmanager

import pymysql


@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host,
                         port=port,
                         user=user,
                         password=password,
                         db=db) as conn:
        with conn.cursor() as cur:
            yield cur
should have been:
@contextmanager
def db_cursor(host='127.0.0.1', port=3306, user='donuts',
              password='', db='donuts'):
    """
    Grab a database cursor
    """
    with pymysql.connect(host=host,
                         port=port,
                         user=user,
                         password=password,
                         db=db) as conn:
        with conn.cursor() as cur:
            yield cur
        conn.commit()
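For illustration, usage of the fixed context manager looks like this (a sketch; the table and values are hypothetical):

# rows inserted here are now committed when the block exits cleanly
with db_cursor(password='...') as cur:
    cur.execute('INSERT INTO shifts (x, y) VALUES (%s, %s)', (1.0, 2.0))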

Custom Docker image fails on another machine (psycopg2.OperationalError: could not translate host name to address)

Absolutely new to Docker and Postgres (I know they're not tightly related, but please read on).
I have a simple Python script (not a Django project; not a Kivy project - just a .py file). It fetches something and writes it into the Postgres db (using psycopg2). On my (Windows 10) machine, after a million trials and errors to get this working, it works. So when I docker-compose up the whole project, it does the thing it's supposed to do and writes into the Postgres db. After that, when I docker push the resulting image to Docker Hub and then docker pull it onto a totally unrelated Linux Azure VM, it fails with the following error:
Traceback (most recent call last):
  File "/app/file00.py", line 19, in <module>
    conn = psycopg2.connect(
  File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "zedb" to address: Name or service not known
zedb is the name of the Postgres database service in the docker-compose file (I've pasted it below).
I know I haven't got something right, but I am not sure what it is.
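For reference, the failing call in the script presumably looks something like this (a sketch using the credentials from the docker-compose file below):

import psycopg2

# "zedb" is the Compose service name; it only resolves on the Compose
# network, so the hostname is meaningless to a standalone container
conn = psycopg2.connect(host='zedb',
                        dbname='fkpl',
                        user='user',
                        password='user123!')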
Dockerfile for the script (it's pretty much the default template that VSCode gives you):
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:latest
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "file00.py"]
The db part does not contain a Dockerfile, but an init.sql file that creates the table needed for the script to write into. It is mounted from the local directory into the Postgres container via the docker-compose file. From what I understand, if the container fails/shuts down somehow, the data in the tables is retained (volume persistence), and when the container is spun up again, the table is created. Here's what's in the init.sql file:
CREATE TABLE IF NOT EXISTS pt (
    serial_num SERIAL,
    col1 VARCHAR (40) NOT NULL PRIMARY KEY,
    col2 VARCHAR (150) NOT NULL
);
I could be wrong on so many levels about all this, but there's no one to check with, and I am learning this all by myself.
Finally, here's the docker-compose file.
version: '3'
services:
  zedb:
    image: 'postgres'
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=user123!
      - POSTGRES_DB=fkpl
      - PGDATA=/var/lib/postgresql/data/db-files/
    expose:
      - 5432
    ports:
      - 5432:5432
    volumes:
      - ./db/:/var/lib/postgresql/data/
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
  zescript:
    build: ./app
    volumes:
      - ./app:/usr/scr/app
    depends_on:
      - zedb
Any help is greatly appreciated.

Dockerfile create image with both Python and MySQL

I have two containers, "web" and "db". I have an existing data file in CSV format.
The problem is that I can initialize the MySQL database with a schema using docker-compose, or just run it with parameters, but how can I import the existing data? I have a Python script to parse and filter the data and then insert it into the db, but I cannot run it in the "db" container because that image only contains MySQL.
Update 1
version: '3'
services:
  web:
    container_name: web
    build: .
    restart: always
    links:
      - db
    ports:
      - "5000:5000"
  db:
    image: mysql
    container_name: db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: "test"
      MYSQL_USER: "test"
      MYSQL_PASSWORD: "test"
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "33061:3306"
There is a Python script that reads data from a CSV file and inserts it into the database, which works fine. Now I want to run the script once the MySQL container is set up. (I have already got Python connecting to MySQL in a container.)
Otherwise, does anyone have a better solution for importing the existing data?
The MySQL Docker image can execute shell scripts or SQL files if these script/SQL files are mounted under /docker-entrypoint-initdb.d of a running container, as described here and here. So I suggest you write an SQL file that reads the CSV file (which you should mount into your container so the SQL file can read it) in order to load it into MySQL, perhaps something similar to this answer, or write a bash script to import the CSV into MySQL, whatever works for you.
You can check Initializing a fresh instance on the official Docker Hub page for mysql.
From the Dockerfile, you can call a script (entrypoint). In this script you can call your Python script. For example:
Dockerfile:
FROM php:7.2-apache
RUN apt-get update
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
This will run your entrypoint script in the app container. Make sure you have a depends_on attribute in your app container's Compose description.
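For instance, a loader script called from such an entrypoint could look like this (a sketch, assuming pymysql, the credentials from the compose file above, and a hypothetical data.csv and table):

import csv
import time

import pymysql

# retry until the MySQL container is ready to accept connections
for _ in range(30):
    try:
        conn = pymysql.connect(host='db', user='test',
                               password='test', db='test')
        break
    except pymysql.err.OperationalError:
        time.sleep(2)
else:
    raise RuntimeError('MySQL never became ready')

# parse the CSV and insert row by row (table and columns are hypothetical)
with conn.cursor() as cur, open('data.csv', newline='') as f:
    for row in csv.reader(f):
        cur.execute('INSERT INTO mytable (col1, col2) VALUES (%s, %s)', row)
conn.commit()
conn.close()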

Hot-reloading a Discord Python bot in Docker

I want to dockerize my Discord bot written in Python for the development process, but I can't get it done. In docker-compose it looks like this right now:
discord_bot:
  build: ./discord
  volumes:
    - ./discord:/usr/src/discord
  depends_on:
    - mongo
    - node
Is there a way I can hot reload this code while still using discord.py?
If you want it to auto-reload on code change for local development, what you have is mostly correct. The one thing you're missing is launching the main process via some sort of file watcher. You could use nodemon with Python, or find some Python-specific equivalent.
Changes you need to make:
Your build image needs to contain some sort of file watcher. You could use nodemon for this (even for Python), or use some Python equivalent.
You should override the default command of the image to launch via your file watcher.
discord_bot:
  build: ./discord    <--- should include the file watcher executable (nodemon or a Python equivalent)
  command: nodemon --ext py --exec python /usr/src/discord/bot.py    <--- add this line (bot.py is whatever your entry file is)
  volumes:
    - ./discord:/usr/src/discord
  depends_on:
    - mongo
    - node
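If you would rather avoid pulling Node into a Python image, the watchdog package can do the same job in pure Python. Here is a minimal sketch of a restart wrapper (the file names reload.py and bot.py are assumptions):

# reload.py - restart the bot whenever a .py file changes
# (requires: pip install watchdog)
import subprocess
import sys
import time

from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer


class RestartHandler(PatternMatchingEventHandler):
    def __init__(self):
        super().__init__(patterns=['*.py'])
        self.process = subprocess.Popen([sys.executable, 'bot.py'])

    def on_any_event(self, event):
        # kill the old bot process and start a fresh one
        self.process.terminate()
        self.process.wait()
        self.process = subprocess.Popen([sys.executable, 'bot.py'])


if __name__ == '__main__':
    handler = RestartHandler()
    observer = Observer()
    observer.schedule(handler, path='.', recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
        handler.process.terminate()
    observer.join()

The Compose command then becomes command: python reload.py instead of the nodemon line.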

How to run Odoo tests unittest2?

I tried running Odoo tests using --test-enable, but it won't work. I have a couple of questions.
According to the documentation, tests can only be run during module installation; what happens when we add functionality and then want to run tests?
Is it possible to run tests from an IDE like PyCharm?
This is useful for running an Odoo test case:
./odoo.py -i/-u module_being_tested -d db_being_used_to_test --test-enable

Common options:
  -i INIT, --init=INIT
      install one or more modules (comma-separated list, use "all" for all modules), requires -d
  -u UPDATE, --update=UPDATE
      update one or more modules (comma-separated list, use "all" for all modules), requires -d
Database related options:
  -d DB_NAME, --database=DB_NAME
      specify the database name
Testing Configuration:
  --test-enable: Enable YAML and unit tests.
@aftab You need to add the log level, please see below.
./odoo.py -d <dbname> --test-enable --log-level=test
And regarding your question: if you are making changes to installed modules and need to re-test all test cases, you simply need to restart your server with -u <module_name> (or -u all for all modules) along with the above command.
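For context, the tests that --test-enable picks up are plain Python test classes inside your module's tests/ package, something like this sketch (the model and assertion are illustrative; older releases imported from openerp.tests instead of odoo.tests):

from odoo.tests.common import TransactionCase


class TestPartner(TransactionCase):

    def test_partner_created(self):
        # each test runs in a transaction that is rolled back afterwards
        partner = self.env['res.partner'].create({'name': 'Test Partner'})
        self.assertEqual(partner.name, 'Test Partner')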
Here is a REALLY nice plugin to run Odoo unit tests directly with pytest:
https://github.com/camptocamp/pytest-odoo
Here's a result example (screenshot omitted).
I was able to run Odoo's tests using PyCharm. To achieve this I used Docker + pytest-odoo + PyCharm (using remote interpreters).
First you set up a Dockerfile like this:
FROM odoo:14
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-pip
RUN pip3 install pytest-odoo coverage pytest-html
USER odoo
And a docker-compose.yml like this:
version: '2'
services:
web:
container_name: plusteam-odoo-web
build:
context: .
dockerfile: Dockerfile
image: odoo:14
depends_on:
- db
ports:
- "8069:8069"
volumes:
- odoo-web-data:/var/lib/odoo
- ./config:/etc/odoo
- ./addons:/mnt/extra-addons
command: --dev all
db:
container_name: plusteam-odoo-db
image: postgres:13
ports:
- "5432:5432"
environment:
- POSTGRES_DB=postgres
- POSTGRES_PASSWORD=odoo
- POSTGRES_USER=odoo
- PGDATA=/var/lib/postgresql/data/pgdata
volumes:
- odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
odoo-web-data:
odoo-db-data:
So we extend an Odoo image with pytest-odoo and the packages needed to generate coverage reports.
Once you have this, you can run docker-compose up -d to get your Odoo instance running; the Odoo container will have pytest-odoo installed. The next part is to tell PyCharm to use a remote interpreter pointing at the modified Odoo image that includes the pytest-odoo package.
Now every time you run a script, PyCharm will launch a new container based on the image you provided.
After examining the containers launched by PyCharm, I realized they bind the project's directory to the /opt/project/ directory inside the container. This is useful because you will need to modify the odoo.conf file when you run your tests.
You can customize the database connection for a custom testing db (which you should do), and the important part is that you need to point the addons_path option to /opt/project/addons, or wherever your custom addons end up inside the containers launched by PyCharm.
With this you can create a PyCharm run configuration for pytest (screenshot omitted).
Notice how we provide the path to the Odoo config with the modifications for testing; this way the Odoo instance in the container launched by PyCharm knows where your custom addon's code is located.
Now we can run the script and even debug it, and everything works as expected.
I go further into this matter (my particular solution) in a Medium article, and I even wrote a repository with a working demo so you can try it out; hope this helps:
https://medium.com/plusteam/how-to-run-odoo-tests-with-pycharm-51e4823bdc59 https://github.com/JSilversun/odoo-testing-example
Be aware that when using remote interpreters you just need to make sure the Odoo binary can find the addons folder properly and you will be all set :) Besides, using a Dockerfile to extend an image helps to speed up development.
