Dockerfile create image with both python and mysql - python

I have two containers, "web" and "db", and an existing data file in CSV format.
I can initialize the MySQL database with a schema using docker-compose (or just run the image with parameters), but how can I import the existing data? I have a Python script to parse and filter the data and then insert it into the db, but I cannot run it in the "db" container because that image only contains MySQL.
Update1
version: '3'
services:
  web:
    container_name: web
    build: .
    restart: always
    links:
      - db
    ports:
      - "5000:5000"
  db:
    image: mysql
    container_name: db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_DATABASE: "test"
      MYSQL_USER: "test"
      MYSQL_PASSWORD: "test"
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "33061:3306"
There is a Python script that reads data from a CSV file and inserts it into the database, which works fine. Now I want to run the script once the MySQL container is set up. (I have already got Python connecting to MySQL in the container.)
Otherwise, does anyone have a better solution for importing the existing data?

The MySQL docker image can execute shell scripts or SQL files if those script/sql files are mounted under /docker-entrypoint-initdb.d, as described here and here. So I suggest you write an SQL file that reads the CSV file (which you should also mount into the container so the SQL file can read it) in order to restore it into MySQL, maybe similar to this answer, or write a bash script to import the CSV into MySQL, whatever works for you.
You can check Initializing a fresh instance on the official Docker Hub page for mysql.
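For example, a minimal sketch with a hypothetical table mytable and placeholder file names (data.csv, init.sql); in the official MySQL 8 image secure_file_priv typically points at /var/lib/mysql-files, so the CSV is mounted there:

# docker-compose.yml, db service excerpt (sketch)
db:
  image: mysql
  volumes:
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    - ./data.csv:/var/lib/mysql-files/data.csv

-- init.sql (sketch): create the table, then bulk-load the CSV
CREATE TABLE IF NOT EXISTS mytable (col1 VARCHAR(40), col2 VARCHAR(150));
LOAD DATA INFILE '/var/lib/mysql-files/data.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;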

From the Dockerfile, you can call a script as the ENTRYPOINT, and in that script you can call your Python script. For example:
Dockerfile:
FROM php:7.2-apache
RUN apt-get update
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
This will run your entrypoint script in the app container. Make sure you have a depends_on attribute in your app container's compose description.
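A sketch of such an entrypoint script (the script path /app/import_data.py is a placeholder; remember to RUN chmod +x /entrypoint.sh in the Dockerfile, and note that setting ENTRYPOINT resets any CMD inherited from the base image, so define one yourself):

#!/bin/sh
# entrypoint.sh (sketch): run the one-off import, then start the main process
python /app/import_data.py
exec "$@"   # hand control to the CMD, if the Dockerfile defines one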

Related

Implement pytest over FastAPI app running in Docker

I've created a FastAPI app with a Postgres DB which lives in a docker container.
So now I have a docker-compose.yml file with my app and Postgres DB:
version: '3.9'
services:
  app:
    container_name: app_container
    build: .
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    depends_on:
      - my_database
    #networks:
    #  - postgres
  my_database:
    container_name: db_container
    image: postgres
    environment:
      POSTGRES_NAME: dbf
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: password
    volumes:
      - postgres:/data/postgres
    ports:
      - '5432:5432'
    restart: unless-stopped
volumes:
  postgres:
And now I want to run pytest over my DB, testing endpoints and testing my DB.
BUT, when I run the python -m pytest command I get the error could not translate hostname "my_database", since in my database.py file I have set DATABASE_URL = 'postgresql://myuser:password@my_database'. According to the user guide, when I build the docker-compose file, in DATABASE_URL I must put the name of the service instead of the hostname.
Does anyone have an idea how to solve it?!!
The problem is that if you use docker-compose to run your app in one container and the database in another, then from pytest's point of view the DB has not been launched, and pytest can't connect to it. Running pytest this way is the wrong approach!!!!
To run pytest correctly you should:
You must write the name of the service in DATABASE_URL instead of the name of the host! In my case my_database is the name of the service in the docker-compose.yml file, so I should set it as the hostname, like: DATABASE_URL = postgresql://<username>:<password>@<name of service>
pytest must be run in the app container! What does that mean? First of all, start your containers: docker-compose up --build, where --build is optional (it just rebuilds your images if you made some changes to the code in your program files). After this, you should jump into the app container. It can be done from the Docker application on your computer or through the terminal. To do it in a terminal window run: docker exec -it <name of container with your application> bash. You will dive into the container, and after this you can simply run pytest or python -m pytest, and your tests will run as always (see the short sketch below).
If you have some questions you can write me anytime)))
So, the reason for this error was that I ran pytest and it tried to connect to DATABASE_URL which, em... had not been launched yet (as I understand).
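In short, a sketch of that workflow, using the container name app_container from the compose file above:

docker-compose up --build -d
docker exec -it app_container bash
python -m pytest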

Custom Docker image fails on another machine (psycopg2.OperationalError: could not translate host name to address)

Absolutely new to Docker and Postgres (I know they're not related in a tight way, but please read on).
I have a simple Python script (not a Django project, not a Kivy project, just a .py file). It fetches something and writes it into the Postgres db (using psycopg2). On my (Windows 10) machine (after a million trials and errors to get this working), it works: when I docker-compose up the whole project, it does the thing it's supposed to do and writes into the Postgres db. After that, when I docker push the resulting image to Docker Hub, then docker pull it onto a totally unrelated Linux Azure VM, it fails with the following error:
Traceback (most recent call last):
File "/app/file00.py", line 19, in <module>
conn = psycopg2.connect(
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "zedb" to address: Name or service not known
zedb is the name of the Postgres database service in the Docker-compose file (I've pasted it below).
I know I've not done something right, but I am not sure what it is.
Dockerfile for the script (it's pretty much the default template that VSCode gives you):
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:latest
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "file00.py"]
The db part does not have a Dockerfile, just an init.sql file that creates the table the script writes into; it is mounted into the Postgres image from the docker-compose file. From what I understand, if the container fails or shuts down somehow, the data in the tables is retained (volume persistence), and when the container is spun up again the table is created. Here's what's in the init.sql file:
CREATE TABLE IF NOT EXISTS pt (
serial_num SERIAL,
col1 VARCHAR (40) NOT NULL PRIMARY KEY,
col2 VARCHAR (150) NOT NULL
);
I could be wrong on so many levels about all this, but there's no one to check with, and I am learning this all by myself.
Finally, here's the docker-compose file.
version: '3'
services:
  zedb:
    image: 'postgres'
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=user123!
      - POSTGRES_DB=fkpl
      - PGDATA=/var/lib/postgresql/data/db-files/
    expose:
      - 5432
    ports:
      - 5432:5432
    volumes:
      - ./db/:/var/lib/postgresql/data/
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
  zescript:
    build: ./app
    volumes:
      - ./app:/usr/scr/app
    depends_on:
      - zedb
Any help is greatly appreciated.
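A note on the likely cause: the hostname zedb only exists on the network that docker-compose creates, so a container started on its own from the pulled image has no such DNS entry. A sketch of recreating the setup on the VM with a shared user-defined network (names are placeholders):

docker network create zenet
docker run -d --name zedb --network zenet -e POSTGRES_USER=user -e POSTGRES_PASSWORD=user123! -e POSTGRES_DB=fkpl postgres
docker run --network zenet <your-dockerhub-image>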

What is the proper way to setup a simple docker-compose configuration for testing?

My current docker-compose.yml file:
version: '2'
services:
  app:
    restart: always
    build: ./web
    ports:
      - "8000:8000"
    volumes:
      - ./web:/app/web
    command: /usr/local/bin/gunicorn -w 3 -b :8000 project:create_app()
    environment:
      FLASK_APP: project/__init__.py
    depends_on:
      - db
    working_dir: /app/web
  db:
    image: postgres:9.6-alpine
    restart: always
    volumes:
      - dbvolume:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
volumes:
  dbvolume:
I'm now trying to create a docker-compose-test.yml file that overrides the previous file for testing. What came to my mind was to use this:
version: '2'
services:
  app:
    command: pytest
  db:
    volumes:
      - dbtestvolume:/var/lib/postgresql/data
volumes:
  dbtestvolume:
And then run the tests with the command:
docker-compose -f docker-compose.yml -f docker-compose-test.yml run --rm app
that, as far as I understand, should override only the aspects that differ from the docker file used for development, that is, the command used and the data volume where the data is stored.
The command is successfully overridden, but unfortunately the data volume stays the same, and so the data of my application gets overwritten when I run my tests.
Is this the correct way to set up a docker configuration for the tests? Any suggestion about what is going wrong?
If this is not the correct way, what is the proper way to setup a docker-compose configuration for testing?
Alternative test
I tried to change my docker-compose-test.yml file to use a different service (db-test) for testing:
version: '2'
services:
  app:
    command: pytest
    depends_on:
      - db-test
  db-test:
    image: postgres:9.6-alpine
    restart: always
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
What happens now is that my data is not overwritten when I run my tests (so, in a way, it works, hurray!), but if I try to run the command:
docker-compose down
I get this output:
Stopping app_app_1 ... done
Stopping app_db_1 ... done
Found orphan containers (app_db-test_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
and then the docker-compose down fails. So something is not configured properly.
Any idea?
If you don't want to persist the DB data, don't use volumes; that way you will have a fresh database every time you start the container.
I guess you need some prepopulated data in your tables, so just build a new DB image that copies in the data you need. The Dockerfile could be something like:
FROM postgres:9.6-alpine
COPY db-data/ /var/lib/postgresql/data
In case you need to update the data, mount db-data/ using -v, change it, and rebuild the image.
BTW, it would be better to use an automated pipeline to test your builds, using Jenkins, GitLab CI, Travis or whatever solution suits you. Anyway, you can use docker-compose in your pipeline as well to keep it consistent with your local development environment.
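As for the orphan warning in the alternative setup: a plain docker-compose down reads only docker-compose.yml, which does not contain db-test, so its container looks orphaned. Passing the same -f files (plus the flag the warning itself suggests) cleans it up:

docker-compose -f docker-compose.yml -f docker-compose-test.yml down --remove-orphans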

Docker Compose Multiple Containers

I have a Python script which connects to MySQL and inserts data into a database. That is the one container I want to build. I want to build another container which will have a Python script that connects to the database of the first container and executes some queries. I am trying to follow the Docker documentation; however, I find it difficult to write the proper yml file. Any guidance would be very helpful.
It depends on how complex what you want to make is, but the docker-compose.yml file should look similar to this:
version: '3'
services:
  my_database:
    image: mysql
    [... MySQL configs]
  my_python_container:
    build: .
    depends_on:
      - my_database
    links:
      - my_database
I have no knowledge of the configuration of the MySQL database, so I left that part blank ([... MySQL configs]).
The my_python_container service is built from a Dockerfile in the same folder, similar to:
FROM python
COPY script.py script.py
CMD python script.py
This should be enough to get the connection, but you have to consider in your program that the MySQL hostname will be the name given to the service (my_database here).
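A minimal sketch of such a script.py, assuming the mysql-connector-python package is installed in the image and the credentials match your MySQL configs (depends_on only orders startup, it does not wait for MySQL to be ready, hence the retry loop):

import time

import mysql.connector
from mysql.connector import Error

# Retry because the MySQL server may still be initializing when this container starts
for attempt in range(30):
    try:
        conn = mysql.connector.connect(
            host="my_database",   # the compose service name acts as the hostname
            user="test",          # placeholder credentials
            password="test",
            database="test",
        )
        break
    except Error:
        time.sleep(2)
else:
    raise RuntimeError("could not connect to MySQL")

cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()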

Odoo development on Docker

I'm trying to use docker for odoo module development. I have the following docker-compose.yml file:
db:
  image: postgres
  environment:
    POSTGRES_USER: odoo
    POSTGRES_PASSWORD: odoo
  volumes:
    - data:/var/lib/postgresql/data/
odoo:
  image: odoo
  links:
    - db:db
  ports:
    - "127.0.0.1:8069:8069"
  volumes:
    - extra-addons:/mnt/extra-addons
  command: -- --update=tutorial
The module contains only an __openerp__.py file, but odoo doesn't show the changes I make to it, even with the --update=tutorial option:
{
    'name': "tutorial",
    'summary': """Hello world!!""",
    'description': """
        This is the new description
    """,
    'author': "ybouhjira",
    'website': "ybouhjira.com",
    'category': 'Technical Settings',
    'version': '0.1',
    'depends': ["base"],
}
This file is in a folder named tutorial located in extra-addons, and I have tried stopping and starting the containers, even removing and recreating them.
Like shodowsjedi already said, you need to create an __init__.py file (see the module structure: https://www.odoo.com/documentation/8.0/howtos/backend.html#module-structure).
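A minimal layout for the module would then be:

extra-addons/
    tutorial/
        __init__.py
        __openerp__.py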
Also, check permissions in your odoo containers: the files in the odoo volume keep the uid and gid they have on your system (the host), and inside the container those ids can map to a different user. To check this you can use docker exec:
docker exec docker_odoo_1 ls -la /mnt/extra-addons
If you don't know the docker name of your container, you can retrieve it using:
docker-compose ps
Last, and probably most important, check the odoo logs using:
docker-compose logs
and update your module in the configuration page of Odoo (or at the startup of the server)
You have to add your own config file. First, in docker-compose.yml, mount /etc/odoo:
odoo:
  image: odoo
  links:
    - db:db
  ports:
    - "127.0.0.1:8069:8069"
  volumes:
    - extra-addons:/mnt/extra-addons
    - ./config:/etc/odoo
Then create "odoo.conf" in ./config and add configuration options like below.
[options]
addons_path = /mnt/extra-addons,/usr/lib/python2.7/dist-packages/odoo/addons
data_dir = /var/lib/odoo
auto_reload = True
Restart odoo, go to debug mode, then Apps -> Update Apps List.
If it still doesn't work, check the access rights on the addons directories and make sure group and others can read them.
To create a new module you need more than the Odoo manifest file __openerp__.py; you also need the Python descriptor file __init__.py as the minimal structure. (Of course you usually need more than two files, but that is the minimum for a module to exist.) Once you create a module on an existing database you need to call Update Apps List under Settings to load your module correctly, and then you will be able to install it.
Here is the quick guide on module creation.
Here is the detailed guide on the API and framework.
The --update option requires -d to specify the database name.
Odoo CLI doc
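So the command in the compose file would become something like (the database name is a placeholder):

command: -- --update=tutorial --database=<your-db-name>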
Take into account that the description, icons, and version inside the manifest do not always change immediately; try Shift+F5 in your browser, but this is not so relevant when you are developing.
Besides having, as a minimum, the manifest and the __init__.py file, if you are using docker-compose I recommend having a script to bring down, remove and recreate your container:
$ cat doeall
#!/bin/sh
docker-compose down
docker-compose rm
docker-compose up -d
docker-compose logs -f

Run it with ./doeall.
For development purposes, it is also convenient to have the db in a separate docker-compose.yml, so that you can reuse the same db container for several odoo installations.
Take a look at my docker-compose for multiple instances here:
https://github.com/bmya/odoo-docker-compose/tree/multi
Anyway, if you still want to use Postgres as the db in the same docker-compose file, you have it in this other branch:
https://github.com/bmya/odoo-docker-compose/blob/uni/docker-compose.yml
Again, regarding your module, the important things when you are writing code are the following (a restart sketch follows the list):
When you change something in the methods in the Python code, just restart the server.
When you change something in the model inside the Python code, restart the server and reinstall the module.
When you change data files (views, data, etc.), just reinstall the module in order to update the data files.
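With this compose setup, the restart step is just (assuming the odoo service name used in the files above):

docker-compose restart odoo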
This fixed my problem: we need to create "odoo.conf" in ./config:
[options]
addons_path = /mnt/extra-addons,/usr/lib/python2.7/dist-packages/odoo/addons
data_dir = /var/lib/odoo
auto_reload = True
First of all, create a directory containing the docker-compose.yml file and these directories:
/addons
/volumes/odoo/sessions
/volumes/odoo/filestore
/docker-compose.yml
Put this code in your docker-compose.yml file :
version: '3'
services:
  web:
    image: odoo:12.0
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./volumes/odoo/filestore:/opt/odoo/data/filestore
      - ./volumes/odoo/sessions:/opt/odoo/data/sessions
      - ./addons:/mnt/extra-addons
  db:
    image: postgres:10
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
Then, in a terminal, build your environment with:
docker-compose up
and start or stop it with docker-compose start or docker-compose stop.
If you want to add a custom module, just put it in the addons directory, then click Update Apps List in the Apps module and restart Docker. After this, disable all filters in the search bar; normally, if you type the module name in the search bar, your custom module will show up below.
Here is my docker-compose file to run Odoo 15 on Docker:
version: '3'
services:
  postgres:
    image: postgres:13
    container_name: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  odoo:
    image: odoo:15
    container_name: odoo
    restart: always
    depends_on:
      - postgres
    ports:
      - "8069:8069"
      - "8072:8072"
    environment:
      HOST: postgres
      USER: ${POSTGRES_USER}
      PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./etc/odoo:/etc/odoo
      - ./data/addons:/mnt/extra-addons
      - ./data/odoo:/var/lib/odoo
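The ${...} placeholders are read by docker-compose from a .env file placed next to the compose file; a sketch with example values:

# .env (example values; adjust to your setup)
POSTGRES_USER=odoo
POSTGRES_PASSWORD=odoo
POSTGRES_DB=postgres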
