Docker Compose: Allowing Network Interactions Between Services - python

I'd like two Docker containers, defined in the same docker-compose.yaml file, to be able to share a network and interact with each other's exposed ports. I'm running all of this on Docker for Mac.
To that end, I've set up a couple of Docker containers running a tiny Flask server that can either return a "Hello" or make a request to another server (see below for details). So far, I've been unable to get the two apps to communicate with each other.
What I've tried so far:
exposing the relevant ports
publishing the ports and mapping them 1:1 with the host
For flask using both localhost and 0.0.0.0 as the --host arg
curl from one container to another (using both localhost:<other_container_port> and 0.0.0.0:<other_container_port>)
Using the implicit network as per the docs
Explicit network definition
All of the above examples give me a Connection Refused error, so I feel like I'm missing something basic about Docker networking.
The Networking in Compose doc mentions the following:
When you run docker-compose up, the following happens:
...
A container is created using db’s configuration. It
joins the network myapp_default under the name db.
And their example appears to let all the separate services communicate without any network definitions, which leads me to believe I should not need to define a network either.
Below is my docker-compose.yaml file - all the files can be found at this gist:
version: '3'
services:
  receiver:
    build: ./app
    # Tried with/without expose
    expose:
      - 3000
    # Tried with/without ports
    ports:
      - 3000:3000
    # Tried with/without 0.0.0.0
    command: "--host 0.0.0.0 --port 3000"
    # Tried with/without explicit network
    networks:
      - mine
  requester:
    build: ./app
    expose:
      - 4000
    ports:
      - 4000:4000
    # This one's ip is 0.0.0.0, so we can access from host
    command: "--host 0.0.0.0 --port 4000"
    networks:
      - mine
networks:
  mine: {}
The app.py file:
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from {}".format(request.host)

@app.route("/request/<int:port>")
def doPing(port):
    location = "http://localhost:{}/".format(port)
    return requests.get(location, timeout=5).content

In docker-compose, services on the same network can reach each other by service name; you don't even have to expose the ports to the host. So your docker-compose.yaml can be simplified to:
version: '3'
services:
  receiver:
    build: ./app
    command: "--host 0.0.0.0 --port 3000"
  requester:
    build: ./app
    command: "--host 0.0.0.0 --port 4000"
and inside the requester container you can access the other one with
ping receiver
That resolves the name, and you can verify the port is also open, for example with netcat:
nc -z receiver 3000 -v
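Applying the same idea to the app.py above: doPing should target the other service by its compose service name rather than localhost, since localhost inside a container refers to that container itself. A minimal sketch of the fix (assuming the request goes from requester to receiver, using the names from the compose file above):

from flask import Flask
import requests

app = Flask(__name__)

@app.route("/request/<int:port>")
def doPing(port):
    # "receiver" is the compose service name; Docker's embedded DNS
    # resolves it to the other container on the shared network.
    location = "http://receiver:{}/".format(port)
    return requests.get(location, timeout=5).content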

Related

Connect to a python socket inside a docker-compose mediated docker from host

I am trying to create a Python socket inside a Docker container and forward that port to its host machine, where some other programs will try to connect.
For this, the dockerised Python process does the following:
# Python 3.8, inside docker
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(('0.0.0.0', 5000))
while True:
    message, addrs = s.recvfrom(1480)
    _log(f"DATA RECEIVED from {addrs}")  # arbitrary logging function
Because the other programs can't know in advance which IP range will be assigned to the Docker network, I have to rely on Docker's port forwarding. What I would like to achieve is that when anything connects to port 5000 on the host, it gets picked up by the dockerised Python process.
Both the server and the dockerised image are Ubuntu. The IP of the host on the local network is 192.168.1.141, the Docker subnet is 172.18.0.0, and the docker-compose file for the container looks as follows:
docker-compose file
version: "3"
services:
my_docker:
container_name: python_socket
image: registry.gitlab.com/my_group/my_project:latest
volumes:
- ./configs:/configs
- ./logs:/configs/logs
- ./sniffing:/app/sniffing
ports:
- "5000:5000"
- "3788:3788"
- "3787:80"
networks:
- app_network
extra_hosts:
- "host.docker.internal:host-gateway"
depends_on:
- rabbitmq
- backend
restart: always
networks:
app_network:
With this configuration, I am not able to connect. I've seen that, on the host, if I launch an IPython console and try the following:
# Python 3.8, ipython console
import socket

data = "Hello"
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('172.18.0.2', 5000))
s.send(data.encode())
Then this info reaches the container successfully. But if I try to connect to 0.0.0.0:5000, localhost:5000 or 192.168.1.141:5000, I only get connection errors. The container isn't getting the information from outside the machine, either.
What am I missing? I have web servers similarly configured where the Docker port forwarding is successful.
Thanks in advance for your time and help.
You're creating a UDP socket (SOCK_DGRAM). The Compose ports: documentation is somewhat quiet on this, but the general Container networking overview implies that port forwarding defaults to TCP only.
You can explicitly specify you need to forward a UDP port:
ports:
  - '5000:5000/udp'
Then you can connect to the host's DNS or IP address (or localhost or 127.0.0.1, if you're calling from outside a container but on the same host) and the first port number 5000. Do not look up the container-private IP address; it's unnecessary and doesn't work in most contexts.
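As a quick sanity check, a minimal host-side UDP sender might look like this (a sketch; since UDP is connectionless, a successful sendto() only means the datagram was handed off, so watch the container's log for the DATA RECEIVED line):

import socket

# Send a datagram to the published UDP port on the host.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(b"Hello", ("127.0.0.1", 5000))
s.close()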

How to expose Odoo container to LAN

I am currently trying to run an Odoo container in Docker and expose it to my local network so my team can start testing it out, but I can't access the container from another computer on the same network. How can I host Odoo on a Windows Docker machine so that my co-workers can access and work with it?
You simply need to publish the port that your Odoo web service is running on. From the official Odoo Docker Hub repository:
version: '2'
services:
  web:
    image: odoo:12.0
    depends_on:
      - db
    ports:
      - "8069:8069"
  db:
    image: postgres:10
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
Or without docker-compose you could use e.g.
docker run -p 8069:8069 --name odoo --link db:db -t odoo -- --db-filter=odoo_db_.*
If you want to access the internal port 8069 from external port 80, you can simply change the port mapping to 80:8069.
Afterwards, Odoo can be accessed with a browser at [your-ip]:8069, or simply at [your-ip] if you map the external port to 80.
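For a quick check that the mapping is visible from the LAN, something like this from a co-worker's machine would do (a sketch; 192.168.1.50 stands in for your Docker host's LAN IP and is only an assumption here):

import socket

try:
    # Attempt a plain TCP connection to the published Odoo port.
    sock = socket.create_connection(("192.168.1.50", 8069), timeout=5)
    print("Odoo port 8069 is reachable")
    sock.close()
except OSError as exc:
    print("Cannot reach Odoo:", exc)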

Flask and selenium-hub are not communicating when dockerised

I am having issues with getting data back from a docker-selenium container, via a Flask application (also dockerized).
When I have the Flask application running in one container, I get the following error at http://localhost:5000. The app drives the Selenium grid through a Remote driver pointed at http://localhost:4444/wd/hub.
The error that is generated is:
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>
I have created a github repo with my code to test, see here.
My docker-compose file below seems ok:
version: '3.5'
services:
  web:
    volumes:
      - ./app:/app
    ports:
      - "5000:80"
    environment:
      - FLASK_APP=main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    command: flask run --host=0.0.0.0 --port=80
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
  selenium-hub:
    image: selenium/hub:3.141
    container_name: selenium-hub
    ports:
      - 4444:4444
  chrome:
    shm_size: 2g
    volumes:
      - /dev/shm:/dev/shm
    image: selenium/node-chrome:3.141
    # image: selenium/standalone-chrome:3.141.59-copernicium
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
What is strange is that when I run the Flask application in Pycharm, and the selenium grid is up in docker, I am able to get the data back through http://localhost:5000. The issue is only happening when the Flask app is running inside docker.
Thanks for the help in advance, let me know if you require further information.
Edit
So I amended my docker-compose.yml file to include a network (updated the code in the GitHub repo). Since the Flask app code runs in debug mode from a volume, any update to the code triggers a refresh of the debugger.
I ran docker network inspect on the created network and found the internal Docker IP address of selenium-hub. I updated app/utils.py, in get_driver(), to use that IP address in command_executor rather than localhost. Saving and re-running from my browser returns the data successfully.
But I don't understand why http://localhost:4444/wd/hub would not work, the docker containers should see each other in the network as localhost, right?
the docker containers should see each other in the network as localhost, right?
No, this is only true when they use host networking and expose ports through the host.
When you have services interacting with each other in docker-compose (or stack) the services should refer to each other by the service name. E.g. you would reach the hub container at http://selenium-hub:4444/wd/hub. Your Flask application could be reached by another container on the same network at http://web
You may be confused if you normally run docker with host networking, because on the host network selenium-hub is also exposed on the same port 4444. So a container started with host networking could use http://localhost:4444 just fine there as well.
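Concretely, the driver setup in the Flask container might look like this (a sketch against selenium 3.141; get_driver in app/utils.py is the function mentioned in the question):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_driver():
    # Use the compose service name, not localhost: inside the "web"
    # container, localhost is that container itself.
    return webdriver.Remote(
        command_executor="http://selenium-hub:4444/wd/hub",
        desired_capabilities=DesiredCapabilities.CHROME,
    )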
Could potentially be a port in use issue related to the execution?
See:
Python urllib2: Cannot assign requested address

how to connect python app in docker container with running docker container with url

I have a Python app that I want to run in a Docker container, and it has this line:
h2o.connect(ip='127.0.0.1', port='54321')
The h2o server is running in a Docker container, and it always has a different IP. One time it started on 172.19.0.5, another time 172.19.0.3, sometimes 172.17.0.3.
So it is always random, and I can't connect the Python app to it.
I tried to expose the port of the h2o server to localhost and then connect the Python app (the code above), but it is not working.
You don't connect two Docker containers through IP addresses. Instead, use Docker's internal network aliases (the service names):
version: '3'
services:
  server:
    ...
    depends_on:
      - database
  database:
    ...
    expose:
      - 54321
then you can define your connection in server as:
h2o.connect(ip='database', port='54321')

Docker cannot connect application to MySQL

I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on MySQL running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on, or network aliases, they can reach each other by hostname. In your case the hostname is the service name, but this can be overridden (see the docs).
Your script in the my_common_package container/service should then connect to mysql on port 3306, according to your setup (not localhost on port 3306).
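For example, with PyMySQL and the credentials from the compose file above, the connection inside the my_common_package container would be a sketch like:

import pymysql

# "mysql" is the compose service name; inside the compose network it
# resolves to the database container.
conn = pymysql.connect(
    host="mysql",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)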
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already has one.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to port 3306 on the host.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
Then use docker network inspect <id> to look at the details.
The IP addresses assigned to containers can be fairly unpredictable, so the only viable way for container-to-container communication is using hostnames.
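If in doubt, you can check the name resolution from inside a container; a minimal sketch:

import socket

# Docker's embedded DNS resolves compose service names to whatever IP
# the container currently has on the shared network.
print(socket.gethostbyname("mysql"))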
