I am trying to connect to an application (which is not running in Docker) from a container that I run with Docker Compose. I am using host network mode and connecting to the external service at host.docker.internal on port 7497. The connection is made from Python code inside the container; the container has no port configuration.
services:
  ibkr-bot-eminisp500:
    container_name: ibkr-bot-eminisp500
    image: |my-image|
    network_mode: host
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - IBKR_CLIENT_URL_KEY=host.docker.internal
      - IBKR_PORT_KEY=7497
But I am getting the following error. What am I missing?
ibkr-bot-eminisp500 | API connection failed: gaierror(-2, 'Name or service not known')
ibkr-bot-eminisp500 | Traceback (most recent call last):
ibkr-bot-eminisp500 | File "/usr/bin/src/app/main.py", line 8, in <module>
ibkr-bot-eminisp500 | ibkrBot = IBKRBot()
Combining host.docker.internal with network_mode: host doesn't make any sense.
If you're running under Linux, then with network_mode: host your container shares your host's main network environment. Drop the extra_hosts section from your config because it isn't doing you any good. You can connect to a service on your host using any IP address from any host interface, including 127.0.0.1.
If you are running on anything other than Linux, then network_mode: host is probably never useful (because the Docker "host" is actually a virtual machine running on top of your primary operating system). In this case, drop network_mode: host from your config, and connect using host.docker.internal.
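If you want a quick sanity check from inside the container, a minimal TCP probe like the sketch below will tell you whether the TWS/IB Gateway port is reachable at all. The environment variable names mirror your compose file; the fallback values are assumptions.

import os
import socket

# Read the target from the same environment variables the bot uses.
host = os.environ.get("IBKR_CLIENT_URL_KEY", "127.0.0.1")  # or "host.docker.internal" off Linux
port = int(os.environ.get("IBKR_PORT_KEY", "7497"))

try:
    # Attempt a plain TCP handshake; gaierror and ConnectionRefusedError are both OSErrors.
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection to %s:%d succeeded" % (host, port))
except OSError as exc:
    print("TCP connection to %s:%d failed: %s" % (host, port, exc))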
Related
I am trying to create 2 containers as per the following docker-compose.yml file. The issue is that if I start up the mongo database container and then run my code locally (hitting 127.0.0.1), everything is fine; but if I try to run my api container and hit that (see yml file), I get connection refused, i.e.:
172.29.0.12:27117: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id:
60437a460a3e0fa904650e35, topology_type: Single, servers: [<ServerDescription ('172.29.0.12', 27117) server_type:
Unknown, rtt: None, error=AutoReconnect('172.29.0.12:27117: [Errno 111] Connection refused')>]>
Please note: I have set mongo to use port 27117 rather than 27017
My app is a Python Flask app and I am using PyMongo in the following manner:
try:
    myclient = pymongo.MongoClient('mongodb://%s:%s@%s:%s/%s' % (username, password, hostName, port, database))
    mydb = myclient[database]
    cursor = mydb["temperatures"]
    app.logger.info('Database connected to: ' + database)
except:
    app.logger.error('Error connecting to database')
What's driving me mad is that it runs locally and successfully accesses mongo via the container, but as soon as I run the app in a container it fails.
docker-compose.yml as follows:
version: '3.7'
services:
  hotbin-db:
    image: mongo
    container_name: hotbin-db
    restart: always
    ports:
      # <port exposed on the host> : <MongoDB port running inside the container>
      - '27117:27017'
    expose:
      # Opens port 27117 on the container
      - '27117'
    command: [--auth]
    environment:
      MONGO_INITDB_ROOT_USERNAME: ***
      MONGO_INITDB_ROOT_PASSWORD: ***
      MONGO_INITDB_DATABASE: ***
      MONGODB_DATA_DIR: /data/db
      MONGODB_LOG_DIR: /dev/null
    # Where our data will be persisted
    volumes:
      - /home/simon/mongodb/database/hotbin-db/:/data/db
      #- my-db:/var/lib/mysql
    # env_file:
    #   - .env
    networks:
      hotbin-net:
        ipv4_address: 172.29.0.12
  hotbin-api:
    image: scsherlock/compost-api:latest
    container_name: hotbin-api
    environment:
      MONGODB_DATABASE: ***
      MONGODB_USERNAME: ***
      MONGODB_PASSWORD: ***
      MONGODB_HOSTNAME: 172.29.0.12
      MONGODB_PORT: '27117'
    depends_on:
      - hotbin-db
    restart: always
    ports:
      # <port exposed on the host> : <API port running inside the container>
      - '5050:5050'
    expose:
      - '5050'
    networks:
      hotbin-net:
        ipv4_address: 172.29.0.13
# Names our volume
volumes:
  my-db:
networks:
  hotbin-net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.29.0.0/16
Using the service name of the mongo container and the standard port of 27017 instead of 27117 (even though 27117 is what is defined in the docker-compose file) works. I'd like to understand why, though.
Your docker compose file does NOT configure MongoDB to run on port 27117. If you want to get it to run on 27117 you would have to change this line in the docker compose:
command: mongod --auth --port 27117
As you haven't specified a port, MongoDB will run on the default port 27017.
Your expose section exposes the container port 27117 to the host, but Mongo isn't running on that port, so that line is effectively doing nothing.
Your ports section maps a host port 27117 to a container port 27017. This means if you're connecting from the host, you can connect on port 27117, but that is connecting to port 27017 on the container.
Now to your Python program. As it is running on the compose network, it should reference other services within that network by their service name.
Putting this together, your connection string will be: mongodb://hotbin-db:27017/yourdb?<options>
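For example, reusing the PyMongo pattern from the question, a sketch of the container-to-container connection (the credentials and database name are placeholders):

import pymongo

# Placeholders; substitute the real values from the container environment.
username, password, database = "my_user", "my_password", "hotbin"

# Use the compose service name hotbin-db as the hostname; inside the
# compose network MongoDB listens on its default port 27017.
myclient = pymongo.MongoClient(
    "mongodb://%s:%s@%s:%s/%s" % (username, password, "hotbin-db", 27017, database)
)
temperatures = myclient[database]["temperatures"]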
As others have mentioned, you really don't need to assign specific IP addresses unless you have a very good reason to. You also don't even need to define a network, as docker-compose creates its own internal network.
Reference: https://docs.docker.com/compose/networking/
Are you using Windows to run the container?
If yes, localhost refers to the localhost of the container, not the localhost of your host machine.
Hence, instead of providing the IP address of your host, try modifying your MongoDB connection string this way when running inside the Docker container:
Try this:
mongodb://host.docker.internal:27017/
instead of:
mongodb://localhost:27017/
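A minimal sketch of that connection, assuming the app runs under Docker Desktop and MongoDB listens on the host's default port 27017:

import pymongo

# host.docker.internal resolves to the host machine from inside the
# container on Docker Desktop (Windows/macOS).
client = pymongo.MongoClient("mongodb://host.docker.internal:27017/")
print(client.server_info()["version"])  # quick round-trip to verify the connection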
I have created a Docker container for a Python application whose code tries to connect to a remote HBase cluster hosted on Cloudera.
Docker is running fine, except that it is not doing read/write operations on the remote HBase.
Here is my part of docker-compose.yml file
version: '2'
services:
  app:
    build: .
    command: python3 app.py
    networks:
      - default
    ports:
      - "8007:8007"
Suggestions are welcomed.
Solved this issue: the Thrift server on the remote HBase cluster was not accessible from Docker.
Whitelisting my Docker host's IP on the HBase cluster solved the issue.
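For reference, the kind of Thrift connection that was being blocked looks roughly like this with happybase (the hostname is a placeholder; 9090 is HBase's default Thrift port):

import happybase

# This is the connection the cluster firewall must allow from the Docker host's IP.
connection = happybase.Connection("hbase-thrift.example.com", port=9090)
connection.open()
print(connection.tables())  # lists table names if the Thrift server is reachable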
I am having issues with getting data back from a docker-selenium container, via a Flask application (also dockerized).
When I have the Flask application running in one container, requests to http://localhost:5000 fail; the app talks to Selenium through a Remote driver pointed at http://localhost:4444/wd/hub.
The error that is generated is:
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>
I have created a github repo with my code to test, see here.
My docker-compose file below seems ok:
version: '3.5'
services:
  web:
    volumes:
      - ./app:/app
    ports:
      - "5000:80"
    environment:
      - FLASK_APP=main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    command: flask run --host=0.0.0.0 --port=80
    # Infinite loop, to keep it alive, for debugging
    # command: bash -c "while true; do echo 'sleeping...' && sleep 10; done"
  selenium-hub:
    image: selenium/hub:3.141
    container_name: selenium-hub
    ports:
      - 4444:4444
  chrome:
    shm_size: 2g
    volumes:
      - /dev/shm:/dev/shm
    image: selenium/node-chrome:3.141
    # image: selenium/standalone-chrome:3.141.59-copernicium
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
What is strange is that when I run the Flask application in Pycharm, and the selenium grid is up in docker, I am able to get the data back through http://localhost:5000. The issue is only happening when the Flask app is running inside docker.
Thanks for the help in advance, let me know if you require further information.
Edit
So I amended my docker-compose.yml file to include a network (the code in GitHub is updated). As the Flask app code runs in debug mode from a mounted volume, any update to the code results in a refresh of the debugger.
I ran docker network inspect on the created network and found the local Docker IP address of selenium-hub. I updated the app/utils.py code, in get_driver(), to use that IP address in command_executor rather than localhost. Saving and re-running from my browser results in a successful return of data.
But I don't understand why http://localhost:4444/wd/hub would not work, the docker containers should see each other in the network as localhost, right?
the docker containers should see each other in the network as localhost, right?
No, this is only true when they use the host networking and expose ports through the host.
When you have services interacting with each other in docker-compose (or stack) the services should refer to each other by the service name. E.g. you would reach the hub container at http://selenium-hub:4444/wd/hub. Your Flask application could be reached by another container on the same network at http://web
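In this setup the driver construction in the Flask app would look roughly like the sketch below (assuming Selenium 3.x to match the 3.141 images above; Selenium 4 replaced desired_capabilities with options):

from selenium import webdriver

# Reach the hub by its compose service name rather than localhost.
driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444/wd/hub",
    desired_capabilities={"browserName": "chrome"},
)
driver.get("https://example.com")
print(driver.title)
driver.quit()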
You may be confused if you normally run Docker with host networking, because there the hub's published port also makes it reachable on port 4444 of the host. So a container started with host networking could use http://localhost:4444 just fine as well.
Could potentially be a port in use issue related to the execution?
See:
Python urllib2: Cannot assign requested address
I have an app in python that I want to run in a docker container and it has a line:
h2o.connect(ip='127.0.0.1', port='54321')
The h2o server is running in a Docker container and it always gets a different IP. One time it started on 172.19.0.5, another time on 172.19.0.3, sometimes on 172.17.0.3.
So it is always random, and I can't connect the Python app to it.
I tried to expose the h2o server's port to localhost and then connect the Python app (with the code above), but it is not working.
You don't connect two Docker containers through IP addresses. Instead, you want to use Docker's internal network aliases: each compose service is reachable from the others under its service name:
version: '3'
services:
  server:
    ...
    depends_on:
      - database
  database:
    ...
    expose:
      - "54321"
then you can define your connection in server as:
h2o.connect(ip='database', port=54321)
I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on a MySQL instance running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on
'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on, or network aliases they can reach each other by host name. In your case the host name is the service name, but this can be overridden. (see: docs)
Your script in the my_common_package container/service should then connect to mysql on port 3306 according to your setup (not localhost on port 3306).
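A sketch of that connection with PyMySQL (matching the pymysql error above), reusing the credentials from your compose file:

import pymysql

# The hostname is the compose service name "mysql", not localhost.
connection = pymysql.connect(
    host="mysql",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)
with connection.cursor() as cursor:
    cursor.execute("SELECT VERSION()")
    print(cursor.fetchone())
connection.close()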
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already does this.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to localhost on port 3306.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
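The host-side connection then differs from the sketch above only in the hostname:

import pymysql

# From the host, the published mapping 3306:3306 makes MySQL visible on localhost.
connection = pymysql.connect(
    host="localhost",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)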
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
.. then use docker network inspect <id> to look at the details.
Assigned IP addresses to containers can be pretty random, so the only viable way for container to container communication is using hostnames.