I have used the following commands to create a Docker image with Postgres running on it:
docker pull postgres
docker run --name test-db -e POSTGRES_PASSWORD=my_secret_password -d postgres
I then created a table called test and inserted some random data into a couple of rows.
I am now trying to make a connection to this database table through psycopg2 in Python on my local machine.
I used the command docker-machine ip default to find out the IP address of the machine as 192.168.99.100 and am using the following to try and connect:
conn = psycopg2.connect("dbname='test-db' user='postgres' host='192.168.99.100' password='my_secret_password' port='5432'")
This is not working with the error message of "OperationalError: could not connect to server: Connection refused (0x0000274D/10061)"
Everything seems to be in order so I can't think why this would be refused.
According to the documentation for this postgres image (at https://hub.docker.com/_/postgres/), the image includes EXPOSE 5432 (the postgres port), and the default username is postgres.
I also tried to get the IP address of the container itself with docker inspect test-db | grep IPAddress | awk '{print $2}' | tr -d '",' which I found in an answer to a slightly related question on SO, but that IP address didn't work either.
The EXPOSE instruction may not be doing what you expect. It is used for links and inter-container communication inside the Docker network. When connecting to a container from outside the Docker bridge network, you need to publish the port with -p. Try adding -p 5432:5432 to your docker run command so that it looks like:
docker run --name test-db -e POSTGRES_PASSWORD=my_secret_password -d -p 5432:5432 postgres
Here is a decent explanation of the difference between published and exposed ports: https://stackoverflow.com/a/22150099/684908. Hope this helps!
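Once the port is published, a connection along these lines should work. This is only a minimal sketch: note that test-db is the container name, and the default database this image creates is named postgres, so the dbname may need adjusting unless you actually created a database called test-db:

import psycopg2

# Assumes the container was started with -p 5432:5432 and that you are
# connecting to the image's default "postgres" database; swap in your
# own database name if you created one.
conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    host="192.168.99.100",   # the docker-machine IP from the question
    password="my_secret_password",
    port=5432,
)
cur = conn.cursor()
cur.execute("SELECT * FROM test")   # the table created in the question
print(cur.fetchall())
conn.close()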
I've been trying to configure my M1 Mac to work with an older Ruby on Rails API, and I think in the process I've broken my ability to connect any of my Python APIs to their database images running locally in Docker.
When I run:
psql -U dev -h localhost database
Instead of the lovely psql blinking cursor allowing me to run any SQL statement I'd like, I get this error message:
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: database "dev" does not exist
I've tried docker-compose up and down and force recreating, and brew uninstalling and reinstalling postgres. I've also downloaded the Postgres.app dmg and made sure to change it to a different port, hoping that would trigger whatever is needed for psycopg2 to connect to the Docker image.
the docker-compose.yaml looks like this:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      POSTGRES_USER: dev
      POSTGRES_HOST_AUTH_METHOD: trust
    networks:
      default:
        aliases:
          - postgres
    ports:
      - 5432:5432
What am I missing and what can I blame ruby on rails for (which works by the way) 🤣
I think it's just the Docker configuration you need to update.
First of all, check whether the port is already in use by another service on your local machine (most likely a local Postgres server).
The next step is to change your YAML file as below:
services:
  db:
    image: REDACTED
    container_name: db_name
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=test_db
    ports:
      - 5434:5432
After that you can connect with the following command in your terminal:
psql -U postgres -h localhost -p 5434
This assumes you have a separate YAML file for your Python application.
If you put your Python service in the same YAML file, then the host in your connection string will be the service name (db in your case) and the port will be 5432.
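For example, a minimal psycopg2 sketch under the settings above (test_db, postgres/postgres, and port 5434 all come from the YAML, so adjust to taste):

import psycopg2

# From the host machine, connect through the published port 5434:
conn = psycopg2.connect(
    dbname="test_db", user="postgres", password="postgres",
    host="localhost", port=5434,
)

# From another service in the same compose file, you would instead use
# host="db" (the service name) and port=5432 (the container port).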
So the answer is pretty simple. What was happening was that I had a third instance of postgres running on my computer that I had not accounted for: the brew version. Simply running brew services stop postgres, and later brew uninstall postgres, fixed all my problems. My Ruby on Rails API now works against "postgres native" on my Mac (protip: I changed this one to use port 5431), and my Python APIs work against a containerized postgres on port 5432, without any headaches. During some initial confusion in my Ruby on Rails setup, which required getting Ruby 2.6.7 running on an M1 Mac, I must have installed postgres via brew in an attempt to get something like db:create to work.
Any ideas why I get a connection error when trying to run a redis command from one container to another?
I'm trying to run an old python 2.7 lambda locally in order to test before upgrading to 3.8.10
I'm running a script that builds and links redisdb container with my_app container:
docker build . -t my_app:latest
docker pull redis:3.2.4-alpine && \
docker run --name=redisdb -d redis:3.2.4-alpine redis-server
docker run \
--rm -t -i -d \
--link redisdb:redis \
-h redis -p 6379 \
-e CONFIG_FILE=local_config.yaml \
my_app
I am running a python script on my_app with the following commands:
r = redis.Redis()
r.mset({"Bahamas": "Nassau"})
print(r.get("Bahamas"))
I've also tried the top suggestion from "Error 99 connecting to localhost:6379. Cannot assign requested address" and passed r = redis.Redis(host='localhost', port=6379, decode_responses=True), with the same results.
Every time I try to run that python script in the my_app container, I get:
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
When I run the containers and then check docker ps I get this:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76aea9308821 mar-atlas-readings-processor "python2" About a minute ago Up About a minute 0.0.0.0:64098->6379/tcp mar-atlas-readings-processor
f313113341de redis:3.2.4-alpine "docker-entrypoint.s…" About a minute ago Up About a minute 6379/tcp redisdb
This shows the correct port connections.
Your Python app is trying to talk to Redis on localhost, but inside a container localhost refers to that container itself, not to the Redis container.
You should be able to use the link configured in the docker run ... command, i.e. redisdb (or the link alias redis), for the host option.
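For example, a minimal sketch of the question's script with only the host changed (assuming the --link redisdb:redis from the run command above):

import redis

# "redisdb" (and the link alias "redis") resolve inside the my_app
# container thanks to --link redisdb:redis.
r = redis.Redis(host="redisdb", port=6379)
r.mset({"Bahamas": "Nassau"})
print(r.get("Bahamas"))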
I am using mysql-connector; whenever I run the container using Docker I get this error:
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'db:3306' (-5 No address associated with hostname)
When I run the project directly with Python, however, it executes with no errors.
I want to use phpMyAdmin only for the database. Please help.
To pull the MySQL image on your Linux machine:
docker pull mysql:latest
To run it and mount a persistent folder, with port access on 3306 (the standard port for MySQL):
docker run --name=mysql_dockerdb --env="MYSQL_ROOT_PASSWORD=<your_password>" -p 3306:3306 -v /home/ubuntu/sql_db/<your_dbasename>:/var/lib/mysql -d mysql:latest
To connect to the docker instance so that you can create the database within the docker:
docker exec -it mysql_dockerdb mysql -uroot -p<your_password>
My SQL code to establish the database:
CREATE DATABASE dockerdb;
CREATE USER 'newuser'@'%' IDENTIFIED BY 'newpassword';
GRANT ALL PRIVILEGES ON dockerdb.* TO 'newuser'@'%';
ALTER USER 'newuser'@'%' IDENTIFIED WITH mysql_native_password BY 'newpassword';
You will now have a Docker container running with a persistent SQL database. You connect to it from your Python code. I am running Flask with MySQL. You will want to keep your passwords in environment variables. I am using a Mac, so my ~/.bash_profile contains:
export RDS_LOGIN="mysql+pymysql://<username>:<userpassword>@<dockerhost_ip>/dockerdb"
Within Python:
import os
SQLALCHEMY_DATABASE_URI = os.environ.get('RDS_LOGIN')
And at that point you should be able to connect in your usual Python manner. Note that I've glossed over any security aspects on the presumption this is local behind a firewall.
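For instance, a minimal sketch of that "usual Python manner", assuming SQLAlchemy and PyMySQL are installed (the URI scheme above implies both):

import os
from sqlalchemy import create_engine, text

# RDS_LOGIN is read from the environment, as exported in ~/.bash_profile above.
engine = create_engine(os.environ.get('RDS_LOGIN'))
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())  # sanity-check the connection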
I am using pony.orm to connect to mysqldb using a python code:
db.bind(provider='mysql', user=username, password=password, host='0.0.0.0', database=database)
And when I write the docker compose file:
db:
  image: mariadb
  ports:
    - "3308:3306"
  environment:
    MYSQL_DATABASE: db
    MYSQL_USER: root
    MYSQL_ROOT_PASSWORD: ''
How can I pass the hostname to the python program by giving a value (in environment:) in the docker-compose.yml file ?
If I pass the value there can I access the value through os.environ['PARAM'] in the Python code?
Because you've named your service db in the docker-compose.yaml, you can use that as the host, provided you are on the same network:
db.bind(provider='mysql', user=username, password=password, host='db', database=database)
To ensure you are on that network, in your docker-compose.yaml, at the bottom, you'll want:
networks:
  default:
    external:
      name: <your-network>
And you'll need to create that network before running docker-compose up
docker network create <your-network>
This avoids the need for an environment variable, as the service name is resolvable through Docker's embedded DNS on that network.
You don't need to define your own network, as docker-compose will handle that for you, but if you prefer to be a bit more explicit, it allows you the flexibility to do so. Normally, you would reserve this for multiple compose solutions that you wanted to join together on a single network, which is not the case here.
It's handled in docker-compose the same way you would do it in vanilla docker:
docker run -d -p 3308:3306 --network <your-network> --name db mariadb
docker run -it --network <your-network> ubuntu bash
# in the shell of the ubuntu container
apt-get update && apt-get install iputils-ping -y
ping -c 5 db
# here you will see the results of ping reaching container db
5 packets transmitted, 5 received, 0% packet loss, time 4093ms
Edit
As a note, per @DavidMaze's comment, the port you will be communicating with is 3306, since that's the port the container is listening on, not 3308.
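As for the second half of the question: yes, any value you set under environment: in docker-compose.yml is visible to the process through os.environ. A minimal sketch, where DB_HOST is an illustrative variable name you would add to your app service's environment yourself (it is not something the image defines):

import os
from pony.orm import Database

db = Database()
db.bind(
    provider='mysql',
    user=os.environ.get('DB_USER', 'root'),
    password=os.environ.get('DB_PASSWORD', ''),
    host=os.environ.get('DB_HOST', 'db'),  # falls back to the compose service name
    database=os.environ.get('DB_NAME', 'db'),
)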
I'm new to docker, redis and any kind of networking, (I know python at least!). Firstly I have figured out how to get a redis docker image and run it in a docker container:
docker run --name some-redis -d redis
As I understand it, this redis instance has port 6379 available for connections from other containers.
docker network inspect bridge
"Containers": {
"2ecceba2756abf20d5396078fd9b2ecf0d60ab04ca6b8df5e1b631b6fb5e9a85": {
"Name": "some-redis",
"EndpointID": "09f0069dae3632a2456cb4d82ad5e7c9782a2b58cb7a4ee655f57b5c410c3e87",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
If I run the following command I can interact with the redis instance and generate key:value pairs:
docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
set 'a' 'abc'
>OK
get 'a'
>"abc"
quit
I have figured out how to make and run a docker container with the redis library installed that will run a python script as follows:
Here is my Dockerfile:
FROM python:3
ADD redis_test_script.py /
RUN pip install redis
CMD [ "python", "./redis_test_script.py" ]
Here is redis_test_script.py:
import redis
print("hello redis-py")
Build the docker image:
docker build -t python-redis-py .
If I run the following command the script runs in its container:
docker run -it --rm --name pyRed python-redis-py
and returns the expected:
>hello redis-py
It seems like both containers are working OK; the problem is connecting them together. I would ultimately like to use Python to perform operations on the redis container. If I modify the script as follows and rebuild the image for the python container, it fails:
import redis
print("hello redis-py")
r = redis.Redis(host="localhost", port=6379, db=0)
r.set('z', 'xyz')
r.get('z')
I get several errors:
...
OSError: [Errno 99] Cannot assign requested address
...
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
.....
It looks like they're not connecting. I tried again using the bridge IP in the python script:
r = redis.Redis(host="172.17.0.0/16", port=6379, db=0)
and get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.0/16:6379. Name or service not known.
and I tried the redis sub IP:
r = redis.Redis(host="172.17.0.2/16", port=6379, db=0)
and I get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.2/16:6379. Name or service not known.
It feels like I'm fundamentally misunderstanding something about how to get the containers to talk to each other. I've read quite a lot of documentation and tutorials but as I say have no networking experience and have not previously used docker so any helpful explanations and/or solutions would be really great.
Many thanks
That's all about Docker networking. The fast solution is to use host network mode for both containers. The drawback is less isolation, but you will get it working quickly:
docker run -d --network=host redis ...
docker run --network=host python-redis-py ...
Then to connect from python to redis just use localhost as a hostname.
A better solution is to use a Docker user-defined bridge network:
# create network
docker network create foo
docker run -d --network=foo --name my-db redis ...
docker run --network=foo python-redis-py ...
Note that in this case you cannot use localhost; instead, use my-db as the hostname. That's why I've used the --name my-db parameter when starting the first container. In user-defined bridge networks, containers reach each other by their names.
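For example, a minimal change to the question's script under that setup (my-db comes from the --name above):

import redis

# "my-db" resolves via Docker's embedded DNS on the user-defined network "foo".
r = redis.Redis(host="my-db", port=6379, db=0)
r.set('z', 'xyz')
print(r.get('z'))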
Do:
Explicitly create a Docker network for your application, and run your containers connected to that network. (If you use Docker Compose, this happens for you automatically and you don’t need to do anything.)
docker network create foo
docker run -d --net foo --name some-redis redis
docker run -it --rm --net foo --name pyRed python-redis-py
Use containers’ --name as DNS hostnames: you connect to some-redis:6379 to reach the container. (In Docker Compose the name of the service block works too.)
Make the locations of external services configurable, most likely using an environment variable. In your Python code you can connect with:

import os
import redis

r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"),
                port=int(os.environ.get("REDIS_PORT", "6379")))
docker run --rm -it \
--name py-red \
--net foo \
-e REDIS_HOST=some-redis \
python-redis-py
Don’t:
docker inspect anything to find the container-private IP addresses. Between containers you can always use hostnames as described above. The container-private IP addresses are unreachable from other hosts, and may even be unreachable from the same host on some platforms.
Use localhost in Docker for anything, except the specific case of connecting from a browser or other process running directly on the host (not in a container) to a port you’ve published with docker run -p on the same host. (Inside a container, localhost generally means “this container”.)
Hard-code host names in your code like this; it makes it hard to run the service in a different environment. (For databases in particular it’s not uncommon to run them outside of Docker or even in a hosted cloud service.)
Use --link, it’s outdated and unnecessary.