Access mariadb docker-compose container from host-machine - python

I'm trying to access a MariaDB container from a Python script on my host machine (macOS).
I tried all network_modes (host, bridge, default), but none of them works.
I was able to connect to the container through phpMyAdmin, but only if both containers are in the same docker-compose network.
Here is my docker-compose.yml (shown here with the network_mode: bridge attempt):
version: '3.9'
services:
  mariadb:
    image: mariadb:10.9.1-rc
    container_name: mariadb
    network_mode: bridge
    ports:
      - 3306:3306
    volumes:
      - ...
    environment:
      - MYSQL_ROOT_PASSWORD=mysqlroot
      - MYSQL_PASSWORD=mysqlpw
      - MYSQL_USER=test
      - MYSQL_DATABASE=test1
      - TZ=Europe/Berlin
  phpmyadmin:
    image: phpmyadmin:5.2.0
    network_mode: bridge
    container_name: pma
    # links:
    #   - mariadb
    environment:
      - PMA_HOST=mariadb
      - PMA_PORT=3306
      - TZ=Europe/Berlin
    ports:
      - 8081:80
Any tips on how to access the container through the Python mariadb package?
Thanks!

Everything looks okay; just check the parameters when trying to connect to the database:
host: 0.0.0.0
port: 3306 (as in the docker-compose)
user: test (as in the docker-compose)
password: mysqlpw (as in the docker-compose)
database: test1 (as in the docker-compose)
Example:
db = MySQLdb.connect("0.0.0.0", "test", "mysqlpw", "test1")
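Since the question asks about the Python mariadb package specifically, here is a minimal sketch with the same parameters, assuming the connector is installed (pip install mariadb) and port 3306 is published as in the compose file above:

import mariadb

# Parameters mirror the docker-compose environment above.
conn = mariadb.connect(
    host="127.0.0.1",  # the port published on the host; 0.0.0.0 also works on macOS
    port=3306,
    user="test",
    password="mysqlpw",
    database="test1",
)
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()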

Related

(pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysql' ([Errno -3] Temporary failure in name resolution")

I am trying to Dockerize a FastAPI app that uses MySQL and Selenium.
I am having issues connecting MySQL with the FastAPI app in Docker.
I was able to connect to the MySQL container from MySQL Workbench using 'localhost' as the host, and that worked well. However, when I run the FastAPI container, which should connect to the MySQL database, I get this error:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'mysql' ([Errno -3] Temporary failure in name resolution")
Here is docker-compose.yml:
version: '3'
services:
  chrome:
    build: .
    image: selenium/node-chrome:3.141.59-20210929
    ports:
      - "4444:4444"
      - "5900:5900"
    volumes:
      - "/dev/shm:/dev/shm"
    networks:
      - selenium
  mysql:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=autojob
      - MYSQL_USER=user
      - MYSQL_PASSWORD=4444
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    volumes:
      - ./init:/docker-entrypoint-initdb.d
      - autojob:/var/lib/mysql
    ports:
      - "3307:3306"
    expose:
      - "3307"
  app:
    build: .
    restart: on-failure
    container_name: "autojobserve_container"
    command: uvicorn autojobserve.app:app --host 0.0.0.0 --port 8000 --reload
    ports:
      - 8000:8000
    volumes:
      - "./:/app"
    networks:
      - selenium
    depends_on:
      mysql:
        condition: service_healthy
volumes:
  autojob: {}
networks:
  selenium:
Here is the line that connects to MySQL in the FastAPI app:
engine = create_engine("mysql+pymysql://user:4444@mysql:3307/autojob")
Docker Desktop also shows that the MySQL container is ready for connections:
2022-11-08T11:49:26.334069Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-11-08T11:49:26.334869Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.31' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
2022-11-08 11:49:14+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
2022-11-08 11:49:14+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-11-08 11:49:14+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
'/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
What could possibly be wrong?
Note: Everything works well before dockerizing.
Your app container declares networks: [selenium]. The mysql container doesn't have a networks: block at all, so Compose automatically inserts networks: [default]. Since the two containers aren't on the same Docker network, they can't communicate with each other, and the DNS-resolution failure you're seeing is one symptom of that.
The setup I'd recommend here is to delete all of the networks: blocks in the whole file. Compose will automatically create a default network and attach all containers to it, and for most applications this is the correct setup.
(You also do not need the obsolete expose: option, or to manually specify container_name:. You generally should not need volumes: to inject code into your container, or command: to override what it runs; the code and its default command should be specified in the Dockerfile.)
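Once every service is on that shared network, the hostname mysql resolves, but note that the connection URL should also target the container-internal port 3306; 3307 is only the port published to the host. A minimal sketch of the corrected line:

from sqlalchemy import create_engine

# 'mysql' resolves through Compose DNS once the services share a network;
# 3306 is the container-internal port (3307 is only published to the host).
engine = create_engine("mysql+pymysql://user:4444@mysql:3306/autojob")

with engine.connect() as conn:
    print("connected:", conn)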

Docker Traefik fails to route websocket

I have several micro-services running as docker containers. All web services work fine and route correctly.
The only issue is the websocket service.
The websocket service itself is using python websockets and has it's own TLS certificates.
Trying to access the websocket with wss://websocket.localhost fails, in the setup below it doesn't find the page at all.
In my previous configurations, it results in the Bad Gateway error.
Apparently traefik comes out of the box working with websockets with no additional configurations.
This doesn't seem to be the case. Any pointers?
The websocket connection works without docker or traefik involved, so I ruled that issue out.
Any help on this would be extremely appreciated.
docker-compose.yml
version: "3.7"
networks:
web:
external: true
internal:
external: false
volumes:
mysql_data:
services:
traefik:
image: traefik:v2.2.1
container_name: traefik
restart: always
ports:
- "80:80"
- "443:443"
expose:
- 8080
environment:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config/:/config
- ./traefik.yml:/traefik.yml
networks:
- web
- internal
labels:
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.entrypoints=secure
- traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
- traefik.http.routers.traefik.service=api#internal
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
expose:
- 8080
labels:
- traefik.http.routers.dozzle.tls=true
- traefik.http.routers.dozzle.entrypoints=secure
- traefik.http.routers.dozzle.rule=Host(`dozzle.localhost`) || Host(`logs.localhost`)
networks:
- internal
db:
image: mysql:latest
container_name: db
environment:
MYSQL_ROOT_PASSWORD: ########
restart: always
healthcheck:
test: "exit 0"
command: --default-authentication-plugin=mysql_native_password
ports:
- '3306:3306'
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
websocket:
image: local-websocket-image
container_name: websocket-stage
restart: on-failure
command: python server.py
depends_on:
db:
condition: service_healthy
expose:
- 8080
networks:
- web
- internal
environment:
- PATH_TO_CONFIG=/src/setup.cfg
volumes:
- ${PWD}/docker-config:/src
- ${PWD}/config/certs/socket:/var
labels:
- traefik.http.routers.core-socket-stage-router.tls=true
- traefik.http.routers.core-socket-stage-router.entrypoints=secure
- traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
traefik.yml
entryPoints:
  insecure:
    address: :80
    http:
      redirections:
        entryPoint:
          to: secure
          scheme: https
  secure:
    address: :443
log:
  level: INFO
accessLog:
  filePath: "traefik-access.log"
  bufferingSize: 100
api:
  dashboard: true
  insecure: true
ping: {}
providers:
  file:
    filename: /config/dynamic.yml # traefik dynamic configuration
    watch: true # every time it changes, it will be reloaded
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: true
config
tls:
  stores:
    default:
      defaultCertificate:
        certFile: cert.crt
        keyFile: key.key
  certificates:
    - certFile: crt.crt
      keyFile: key.key
      stores:
        - default
domains:
  - main: "localhost"
While looking at your configuration, the following things don't fit:
The docker-compose project name becomes part of the domain names. The default is the parent folder name of your docker-compose.yaml. You didn't specify it here, so I assume it to be traefik. You can set it explicitly with docker-compose -p traefik up or via the environment variable COMPOSE_PROJECT_NAME.
You are using the domain name '.localhost', but you don't define the domain name explicitly. That means the default is used, which is derived from the service name, the project name (the folder where the docker-compose file is stored), and the name of the docker network you attach to, following the pattern servicename.projectname_networkname.
Use the attributes hostname and domainname to define a name explicitly (this only works for networks with internal=false).
With two network connections and an additional domainname definition, you get the following domain names:
db.traefik_internal (internal only; db.localhost will not work)
dozzle.traefik_internal (internal only; dozzle.localhost will not work)
traefik.localhost
traefik.traefik_web
traefik.traefik_internal
websocket.localhost
websocket.traefik_web
websocket.traefik_internal
external: true just means that the network is created externally, by docker network create or by another docker-compose project. The main effect is that it is not deleted by docker-compose down. It has nothing to do with connectivity to the outside world.
To get an isolated internal network you have to use the option internal: true.
The option condition: service_healthy is no longer supported with version: "3.7", so either remove it (it doesn't work the way you expect anyway) or change the version to 2.4.
Here is my current version of the docker-compose.yaml:
version: "2.4"
networks:
web:
internal:
internal: true
volumes:
mysql_data:
services:
traefik:
image: traefik:v2.2.1
container_name: traefik
hostname: traefik
domainname: localhost
restart: always
ports:
- "80:80"
- "443:443"
expose:
- 8080
environment:
- /var/run/docker.sock:/var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config/:/config
- ./traefik.yml:/traefik.yml
networks:
- web
- internal
labels:
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.entrypoints=secure
- traefik.http.routers.traefik.rule=Host(`traefik.localhost`)
- traefik.http.routers.traefik.service=api#internal
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
hostname: dozzle
domainname: localhost
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
expose:
- 8080
labels:
- traefik.http.routers.dozzle.tls=true
- traefik.http.routers.dozzle.entrypoints=secure
- traefik.http.routers.dozzle.rule=Host(`dozzle.traefik_internal`) || Host(`logs.localhost`)
networks:
- internal
db:
image: mysql:latest
container_name: db
hostname: db
domainname: localhost
environment:
MYSQL_ROOT_PASSWORD: ########
restart: always
healthcheck:
test: "exit 0"
command: --default-authentication-plugin=mysql_native_password
ports:
- '3306:3306'
volumes:
- mysql_data:/var/lib/mysql
networks:
- internal
websocket:
image: local-websocket-image
container_name: websocket-stage
hostname: websocket
domainname: localhost
restart: on-failure
command: python server.py
depends_on:
db:
condition: service_healthy
expose:
- 8080
networks:
- web
- internal
environment:
- PATH_TO_CONFIG=/src/setup.cfg
volumes:
- ${PWD}/docker-config:/src
- ${PWD}/config/certs/socket:/var
labels:
- traefik.http.routers.core-socket-stage-router.tls=true
- traefik.http.routers.core-socket-stage-router.entrypoints=secure
- traefik.http.routers.core-socket-stage-router.rule=Host(`websocket.localhost`)
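To check the websocket end to end through Traefik once the router rule matches, a quick client probe can help. This is a hedged sketch using the websockets client library (pip install websockets), with certificate verification disabled only because of the self-signed certificate:

import asyncio
import ssl

import websockets

async def probe():
    # Self-signed certificate: skip verification for this test only.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    async with websockets.connect("wss://websocket.localhost", ssl=ctx) as ws:
        pong_waiter = await ws.ping()  # protocol-level ping, no server code needed
        await pong_waiter
        print("websocket reachable through Traefik")

asyncio.run(probe())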

Docker Network Host on Ubuntu

I have a Django REST service and a Flask service that works as a broker for the application. Both are separate projects, each running in its own Docker container.
I'm able to POST a product on the Django service that is consumed by the Flask service; however, I cannot reach the Django service from Flask.
These containers are running on the same network, and I have already tried Thomasleveil's suggestions, including docker-host by qoomon.
The error received by the request is the same as before I tried forwarding the traffic. The difference is that now the request hangs for a while before it fails.
The error is as follows:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0039388340>: Failed to establish a new connection: [Errno 110] Connection timed out'))
The request I'm trying to make is a POST to /api/products/1/like. At the moment, no body is required.
Here is how I'm doing the POST with Flask, where the IP is the Docker bridge IP:
@app.route("/api/products/<int:id>/like", methods=["POST"])
def like(id):
    req = requests.get("http://172.17.0.1:8000/api/user")
    json = req.json()
    try:
        product_user = ProductUser(user_id=json["id"], product=id)
        db.session.add(product_user)
        db.session.commit()
        publish("product_liked", id)
    except:
        abort(400, "You already liked this product")
    return jsonify({
        "message": "success"
    })
Django's docker compose file (please ignore the service tcp_message_emitter):
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
  dockerhost:
    image: qoomon/docker-host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: on-failure
    networks:
      - backend
  tcp_message_emitter:
    image: alpine
    depends_on:
      - dockerhost
    command: [ "sh", "-c", "while :; do date; sleep 1; done | nc 'dockerhost' 2323 -v" ]
    networks:
      - backend
networks:
  backend:
    driver: bridge
Flask's docker compose file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
At this point, I know I'm missing some detail or have a misconfiguration somewhere.
You can have a look at the repo here: https://github.com/lfbatista/ms-ecommerce
Any help would be appreciated.
These containers are not actually on the same network. To put two containers from different docker-compose projects onto one network, you need to 'import' an existing network in one of the files. Here's how you can do it:
# first project
networks:
  internal:
  shared:
---
# second project
networks:
  internal:
  shared:
    # This is where all the magic happens:
    external: true       # Means: do not create a network, import an existing one.
    name: admin_shared   # Name of the existing network, usually <folder_name>_<network_name>.
Do not forget to put all services onto the same internal network, or they will not be able to communicate with each other. If you forget, Docker will create a <folder_name>_default network and put any container with no explicitly assigned networks there. You can assign networks like this:
services:
  backend:
    ...
    networks:
      internal:
      # Since this service needs access to a service in the other project,
      # it is attached to both networks.
      shared:
        # This part is relevant for this specific question because both
        # projects have services with identical names. To avoid a mess with
        # DNS names you can give the service an additional name using
        # 'aliases'. This particular service will be available on the
        # shared network as 'flask-backend'.
        aliases:
          - flask-backend
  db:
    ...
    # You can also assign networks as an array if you need no extra configuration:
    networks:
      - internal
And here are the files from your repository. Instead of an IP address, one service can now reach the other as flask-backend or django-backend respectively. Note that I cut out those strange 'host network containers'.
admin/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      internal:
      shared:
        aliases:
          - django-backend
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
    networks:
      - internal
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
    networks:
      - internal
networks:
  internal:
  shared:
main/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    networks:
      internal:
      shared:
        aliases:
          - flask-backend
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    networks:
      - internal
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    networks:
      - internal
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
networks:
  internal:
  shared:
    external: true
    name: admin_shared
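With the shared network and aliases in place, the Flask handler can drop the hard-coded bridge IP and address Django by its alias. A minimal sketch of the changed call (only the URL differs from the code in the question):

import requests

# 'django-backend' is the alias defined on the shared network; 8000 is the
# container-internal port, so no host-published port is involved.
resp = requests.get("http://django-backend:8000/api/user")
user = resp.json()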

'WARNING:: Cannot resolve hostname: [Errno -2] Name or service not known' when trying to connect Zookeeper with Pysolr

I am trying to run a docker-compose setup that gives me a Zookeeper ensemble managing my SolrCloud. Everything runs, and every way I've checked inside the containers, my Zookeeper ensemble appears to be up and running. Yet every time I try to connect, I get an error that the name or service could not be found.
I've tried different docker-compose.ymls, renaming my containers in docker, changing the ports in the connection string, changing the hostname in the connection string, and using localhost in the connection string.
solr1:
  container_name: solr1
  image: solr:5-slim
  ports:
    - "8981:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
  volumes:
    - data:/var/solr
  command: >
    sh -c "solr-precreate users"
solr2:
  image: solr:5-slim
  container_name: solr2
  ports:
    - "8982:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
solr3:
  image: solr:5-slim
  container_name: solr3
  ports:
    - "8983:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
zoo1:
  image: zookeeper:3.4
  container_name: zoo1
  restart: always
  hostname: zoo1
  ports:
    - 2181:2181
  environment:
    ZOO_MY_ID: 1
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
zoo2:
  image: zookeeper:3.4
  container_name: zoo2
  restart: always
  hostname: zoo2
  ports:
    - 2182:2181
  environment:
    ZOO_MY_ID: 2
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
zoo3:
  image: zookeeper:3.4
  container_name: zoo3
  restart: always
  hostname: zoo3
  ports:
    - 2183:2181
  environment:
    ZOO_MY_ID: 3
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
And here is my Python code:
import pysolr

def connect_solrcloud():
    zookeeper = pysolr.ZooKeeper("zoo1:2181,zoo2:2181,zoo3:2181")
    solr = pysolr.SolrCloud(zookeeper, "users")
    solr.ping()

connect_solrcloud()
I would expect the Zookeeper object to connect, and then to be able to access the "users" core I created in my docker container. Instead I get errors saying:
WARNING:: Cannot resolve zoo1: [Errno -2] Name or service not known
WARNING:: Cannot resolve zoo2: [Errno -2] Name or service not known
WARNING:: Cannot resolve zoo3: [Errno -2] Name or service not known
I don't know if this is a docker-compose issue or the way I set Zookeeper up. No one else online seems to have this problem; they either have trouble standing Zookeeper up or run into some issue once it's connected.
Found my issue. My web container did not include
networks:
  - solr
so it wasn't able to access zookeeper.
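Because the warnings come from hostname resolution rather than from Zookeeper itself, a quick check from inside the (fixed) web container makes the problem visible. A hedged sketch:

import socket

# The zoo1/zoo2/zoo3 names only resolve for containers attached to the
# same Compose network as the Zookeeper services.
for name in ("zoo1", "zoo2", "zoo3"):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as err:
        print(name, "-> unresolved:", err)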

Cannot connect to a PostgreSQL database in Docker from Python

I am trying to use PostgreSQL with Python. I have used the following docker-compose file:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: admin_123
      POSTGRES_USER: admin
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
With the following code, I am trying to connect to the database:
conn = psycopg2.connect(
    database="db_test",
    user="admin",
    password="admin_123",
    host="db"
)
But I am getting this error:
OperationalError: could not translate host name "db" to address:
nodename nor servname provided, or not known
What am I doing wrong?
You need to expose the DB port in the docker-compose file, like this:
db:
  image: postgres
  restart: always
  environment:
    POSTGRES_PASSWORD: admin_123
    POSTGRES_USER: admin
  ports:
    - "5432:5432"
And then connect with localhost:5432
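A minimal sketch of the adjusted call, assuming the script runs on the host. Note that since the compose file sets no POSTGRES_DB, the image only auto-creates a database named after POSTGRES_USER, so db_test must be created separately before this connect succeeds:

import psycopg2

conn = psycopg2.connect(
    database="db_test",   # must already exist; only 'admin' is auto-created here
    user="admin",
    password="admin_123",
    host="localhost",     # the published port on the host, not the service name 'db'
    port=5432,
)
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone())
conn.close()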
Another possible scenario: check whether the port is already being used by another Docker container. Use this command:
$ docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
Here is my docker-compose.yml
$ cat docker-compose.yml
version: '3.1' # specify docker-compose version
services:
  dockerpgdb:
    image: postgres
    ports:
      - "5432:5432"
    restart: always
    environment:
      POSTGRES_PASSWORD: Password
      POSTGRES_DB: dockerpgdb
      POSTGRES_USER: abcUser
    volumes:
      - ./data:/var/lib/postgresql
Now in pgAdmin 4 you can set up a new server as below to test the connection:
host: localhost
port: 5432
maintenance database: postgres
username: abcUser
password: Password
