Trouble Remotely Connecting Flask App to Selenium Grid using Docker - python

I am new to both Docker and Selenium grid and am having issues getting my web app to connect to the selenium hub.
docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "${POSTGRES_PORT}:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build:
      context: ..
      dockerfile: docker/Dockerfile
    environment:
      FLASK_ENV: ${FLASK_ENV}
      FLASK_CONFIG: ${FLASK_CONFIG}
      APPLICATION_DB: ${APPLICATION_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_HOSTNAME: "db"
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_PORT: ${POSTGRES_PORT}
    command: flask run --host 0.0.0.0
    volumes:
      - ..:/opt/code
    ports:
      - "5000:5000"
  chrome:
    image: selenium/node-chrome:4.0.0-20211013
    shm_size: 2gb
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_NODE_GRID_URL=http://localhost:4444
    ports:
      - "6900:5900"
  edge:
    image: selenium/node-edge:4.0.0-20211013
    shm_size: 2gb
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_NODE_GRID_URL=http://localhost:4444
    ports:
      - "6901:5900"
  firefox:
    image: selenium/node-firefox:4.0.0-20211013
    shm_size: 2gb
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_NODE_GRID_URL=http://localhost:4444
    ports:
      - "6902:5900"
  selenium-hub:
    image: selenium/hub:4.0.0-20211013
    container_name: selenium-hub
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"
volumes:
  pgdata:
When I run the stack of containers and check netstat -a, I can see my desktop listening on port 4444, and when I kill the containers it is not.
I can also verify that the hub is running and that all of my nodes connect fine by visiting http://localhost:4444. However, when I run driver = webdriver.Remote(command_executor="http://localhost:4444") from my Python Flask app (which is running in the container specified as web above) I get the error:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=4444): Max retries
exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection
object at 0x7fd4855b7730>: Failed to establish a new connection: [Errno 111] Connection refused'))
I have tried specifying the desired capabilities to match the driver for specific nodes; however, I receive the same error regardless.
I am using Selenium's latest build, 4.0.0, and, as you can see, 4.0.0 images for the parts of the grid, so I don't think it's a compatibility issue.
docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------------------------------------
development_chrome_1 /opt/bin/entry_point.sh Up 0.0.0.0:6900->5900/tcp,:::6900->5900/tcp
development_db_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp,:::5432->5432/tcp
development_edge_1 /opt/bin/entry_point.sh Up 0.0.0.0:6901->5900/tcp,:::6901->5900/tcp
development_firefox_1 /opt/bin/entry_point.sh Up 0.0.0.0:6902->5900/tcp,:::6902->5900/tcp
development_web_1 flask run --host 0.0.0.0 Up 0.0.0.0:5000->5000/tcp,:::5000->5000/tcp
selenium-hub /opt/bin/entry_point.sh Up 0.0.0.0:4442->4442/tcp,:::4442->4442/tcp,
0.0.0.0:4443->4443/tcp,:::4443->4443/tcp,
0.0.0.0:4444->4444/tcp,:::4444->4444/tcp
I feel like I'm fundamentally missing something here. Any thoughts?

I see the mistake now. I was mistakenly attempting to connect to http://localhost:4444 with my client, when I needed to specify the hostname of the hub on the network deployed by Docker Compose.
Fix
Change this line in your flask_app.py:
driver = webdriver.Remote(command_executor="http://localhost:4444")
To:
driver = webdriver.Remote(command_executor="http://container-name:4444")
Where container-name is the Selenium hub name set in docker-compose.yml:
selenium-hub:
  image: selenium/hub:4.0.0-20211013
  container_name: selenium-hub
  ports:
    - "4442:4442"
    - "4443:4443"
    - "4444:4444"
in my case: "selenium-hub"
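For completeness, here is a minimal sketch of the working call from inside the web container (the ChromeOptions object and the URLs visited are illustrative assumptions, not my original code):
import selenium  # Selenium 4.0.0
from selenium import webdriver

# "selenium-hub" is both the service name and the container_name in
# docker-compose.yml, so Docker's embedded DNS resolves it from any
# container on the same compose network.
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444",
    options=options,
)
driver.get("http://web:5000")  # the Flask service is reachable by its service name too
print(driver.title)
driver.quit()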
Resource on docker networking used: https://docs.docker.com/compose/networking/
Final Thoughts
I guess I got tripped up by the fact that I use http://localhost:port to reach both the grid hub and the web container from my browser. The difference is where the request originates: from the host, the published ports make services reachable on localhost, while from inside the Docker network a client has to use the service (or container) name, which Docker's embedded DNS resolves. Anyway, hope this helps someone.

Related

Access mariadb docker-compose container from host-machine

I'm trying to access a MariaDB container from a Python script on my host machine (macOS).
I tried all network_modes (host, bridge, default), but nothing worked.
I was able to connect to the container through phpMyAdmin, but only if both containers are in the same docker-compose network.
Here is my docker-compose.yml with the attempt using network_mode bridge:
version: '3.9'
services:
  mariadb:
    image: mariadb:10.9.1-rc
    container_name: mariadb
    network_mode: bridge
    ports:
      - 3306:3306
    volumes:
      - ...
    environment:
      - MYSQL_ROOT_PASSWORD=mysqlroot
      - MYSQL_PASSWORD=mysqlpw
      - MYSQL_USER=test
      - MYSQL_DATABASE=test1
      - TZ=Europe/Berlin
  phpmyadmin:
    image: phpmyadmin:5.2.0
    network_mode: bridge
    container_name: pma
    # links:
    #   - mariadb
    environment:
      - PMA_HOST=mariadb
      - PMA_PORT=3306
      - TZ=Europe/Berlin
    ports:
      - 8081:80
Any tips on how I can access the container through the Python mariadb package?
Thanks!
Everything seems okay; just check the params when trying to connect to the DB:
host: 0.0.0.0
port: 3306 (as in the docker-compose)
user: test (as in the docker-compose)
password: mysqlpw (as in the docker-compose)
database: test1 (as in the docker-compose)
example:
db = MySQLdb.connect("0.0.0.0","test","mysqlpw","test1")
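Since the question mentions the Python mariadb package, here is the equivalent connection as a hedged sketch (the parameter values mirror the compose file above; 127.0.0.1 targets the port published on the host):
import mariadb

# Connect to the port published on the host machine; credentials come
# from the environment section of the compose file.
conn = mariadb.connect(
    host="127.0.0.1",
    port=3306,
    user="test",
    password="mysqlpw",
    database="test1",
)
cur = conn.cursor()
cur.execute("SELECT 1")  # simple round-trip to verify the connection
print(cur.fetchone())
conn.close()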

Docker Network Host on Ubuntu

I have a Django REST service and another Flask service that works as a broker for the application. They are separate projects, each running in its own Docker container.
I'm able to POST a product on the Django service that is consumed by the Flask service; however, I cannot reach the Django service via Flask.
These containers are running on the same network, and I already tried Thomasleveil's suggestions, including docker-host by qoomon.
The error received by the request is the same as before I tried to forward the traffic. The difference is that now, when I make the request, it hangs for a while until it fails.
The error is as follows:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='172.17.0.1', port=8000): Max retries exceeded with url: /api/user (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0039388340>: Failed to establish a new connection: [Errno 110] Connection timed out'))
The request I'm trying to make is a POST at /api/products/1/like. At the moment, no body is required.
Here is how I'm doing the POST with Flask, where the IP is the Docker IP:
@app.route("/api/products/<int:id>/like", methods=["POST"])
def like(id):
    req = requests.get("http://172.17.0.1:8000/api/user")
    json = req.json()
    try:
        product_user = ProductUser(user_id=json["id"], product=id)
        db.session.add(product_user)
        db.session.commit()
        publish("product_liked", id)
    except:
        abort(400, "You already liked this product")
    return jsonify({
        "message": "success"
    })
Django's docker compose file (please ignore the service tcp_message_emitter):
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
  dockerhost:
    image: qoomon/docker-host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: on-failure
    networks:
      - backend
  tcp_message_emitter:
    image: alpine
    depends_on:
      - dockerhost
    command: [ "sh", "-c", "while :; do date; sleep 1; done | nc 'dockerhost' 2323 -v" ]
    networks:
      - backend
networks:
  backend:
    driver: bridge
Flask's docker compose file:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
At this point, I know that I am missing some detail or that I've a misconfiguration.
You can have a look at the repo here: https://github.com/lfbatista/ms-ecommerce
Any help would be appreciated.
These containers are not actually on the same network. To put two containers from different docker-compose projects into one network, you need to 'import' an existing network in one of the files. Here's how you can do it:
# first project
networks:
  internal:
  shared:
---
# second project
networks:
  internal:
  shared:
    # This is where all the magic happens:
    external: true      # Means do not create a network; import an existing one.
    name: admin_shared  # Name of the existing network, usually <folder_name>_<network_name>.
Do not forget to put all services into the same internal network, or they will not be able to communicate with each other. If you forget to do that, Docker Compose will create a <folder_name>_default network and put any container with no explicitly assigned network there. You can assign networks like this:
services:
  backend:
    ...
    networks:
      internal:
      # Since this service needs access to the service in another project,
      # you put two networks here.
      shared:
        # This part is relevant for this specific question because
        # both projects have services with identical names. To avoid
        # a mess with DNS names, you can add an additional name to the
        # service using an alias. This particular service will be
        # available on the shared network as 'flask-backend'.
        aliases:
          - flask-backend
  db:
    ...
    # You can also assign networks as an array if you need no extra configuration:
    networks:
      - internal
And here are the files from your repository. Instead of an IP address, one service can reach the other via flask-backend or django-backend, respectively. Note that I cut out those strange 'host network' containers.
admin/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python manage.py runserver 0.0.0.0:8000"
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
    networks:
      internal:
      shared:
        aliases:
          - django-backend
  queue:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
    networks:
      - internal
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
    networks:
      - internal
networks:
  internal:
  shared:
main/docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: "python main.py"
    networks:
      internal:
      shared:
        aliases:
          - flask-backend
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  queue:
    networks:
      - internal
    build:
      context: .
      dockerfile: Dockerfile
    command: "python consumer.py"
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    networks:
      - internal
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
networks:
  internal:
  shared:
    external: true
    name: admin_shared
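With the shared network and aliases in place, the Flask handler can drop the hard-coded bridge IP. Here is a minimal sketch of the changed request (assuming the django-backend alias from admin/docker-compose.yml above):
import requests

# Docker's embedded DNS resolves the alias on the shared network,
# so no hard-coded 172.17.0.1 is needed.
req = requests.get("http://django-backend:8000/api/user")
json = req.json()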

Django server inside docker is causing robot tests to give blank screens?

I have built an environment with Docker Compose in order to run Robot Framework tests. The environment consists of a Django web app, Postgres, and a Robot Framework container. The problem I have is that I get many blank screens in different tests, while an external Django web app instance installed on a virtual machine doesn't have this problem.
The blank screens mean that elements are not found, hence so many failures:
JavascriptException: Message: javascript error: Cannot read property 'get' of undefined
(Session info: headless chrome=84.0.4147.89)
I am sure that the problem is with the Django app container itself, not the Robot container, since, as said above, I have tested the same environment against a different web app installed outside Docker, and it worked.
docker-compose.yml:
version: "3.6"
services:
redis:
image: redis:3.2
ports:
- 6379
networks:
local:
ipv4_address: 10.0.0.20
smtpd:
image: mysmtpd:1.0.5
ports:
- 25
networks:
- local
postgres:
image: mypostgres
build:
context: ../dias-postgres/
args:
VERSION: ${POSTGRES_TAG:-12}
hostname: "postgres"
environment:
POSTGRES_DB: ${POSTGRES_USER}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
networks:
local:
ipv4_address: 10.0.0.100
ports:
- 5432
volumes:
- my-postgres:/var/lib/postgresql/data
app:
image: mypyenv:${PYENV_TAG:-1.1}
tty: true
stdin_open: true
user: ${MY_USER:-jenkins}
networks:
local:
ipv4_address: 10.0.0.50
hostname: "app"
ports:
- 8000
volumes:
- ${WORKSPACE}:/app
environment:
ALLOW_HOST: "*"
PGHOST: postgres
PGUSER: ${POSTGRES_USER}
PGDATABASE: ${POSTGRES_USER}
PGPASSWORD: ${POSTGRES_PASSWORD}
ANONYMIZE: "false"
REDIS_HOST: redis
REDIS_DB: 2
APP_PATH: ${APP_PATH}
APP: ${MANDANT}
TIMER: ${TIMER:-20}
EMAIL_BACKEND: "dias.core.log.mail.SmtpEmailBackend"
EMAIL_HOST: "smtpd"
EMAIL_PORT: "25"
robot:
image: myrobot:${ROBOT_TAG:-1.0.9}
user: ${ROBOT_USER:-jenkins}
networks:
local:
ipv4_address: 10.0.0.70
volumes:
- ${WORKSPACE}:/app
- ${ROBOT_REPORTS_PATH}:/APP_Robot_Reports
environment:
APP_ROBOT: ${APP_ROBOT}
TIMER: ${TIMER:-20}
PGHOST: postgres
PGUSER: ${POSTGRES_USER}
PGDATABASE: ${POSTGRES_USER}
PGPASSWORD: ${POSTGRES_PASSWORD}
THREADS: ${THREADS:-4}
tty: true
stdin_open: true
entrypoint: start-robot
networks:
local:
driver: bridge
ipam:
config:
- subnet: 10.0.0.0/24
volumes:
my-postgres:
external: true
name: my-postgres
I have monitored the app's stats and nothing is abnormal during testing. I have also tested the app manually in a browser, and it looks just fine.
Note: there is no mismatch between the chromedriver and Google Chrome versions (this shouldn't matter anyway, since the same Robot container has worked with another instance where Docker was not used for the Django app).
Does anyone have an idea?
I hadn't noticed before that I run pabot with 8 processes while the Django app was started with only 2 Celery workers. As soon as I increased the Celery workers to 4, it worked. I'm not sure whether this is the actual cause, but it made sense to me, and it worked.
celery -A server -c ${CELERY_CONCURRENCY:-2} worker

'WARNING:: Cannot resolve hostname: [Errno -2] Name or service not known' when trying to connect Zookeeper with Pysolr

I am trying to run a docker-compose setup that gives me a ZooKeeper ensemble managing my SolrCloud. Everything runs, and every way I've checked inside the containers, my ZooKeeper ensemble appears to be up and running. Yet every time I try to connect, I get an error that the name or service could not be found.
I've tried using different docker-compose.ymls, changing the names of my containers in Docker, changing the ports in the connection string, changing the hostname in the connection string, and using localhost in the connection string.
solr1:
  container_name: solr1
  image: solr:5-slim
  ports:
    - "8981:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
  volumes:
    - data:/var/solr
  command: >
    sh -c "solr-precreate users"
solr2:
  image: solr:5-slim
  container_name: solr2
  ports:
    - "8982:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
solr3:
  image: solr:5-slim
  container_name: solr3
  ports:
    - "8983:8983"
  environment:
    - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
  networks:
    - solr
  depends_on:
    - zoo1
    - zoo2
    - zoo3
zoo1:
  image: zookeeper:3.4
  container_name: zoo1
  restart: always
  hostname: zoo1
  ports:
    - 2181:2181
  environment:
    ZOO_MY_ID: 1
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
zoo2:
  image: zookeeper:3.4
  container_name: zoo2
  restart: always
  hostname: zoo2
  ports:
    - 2182:2181
  environment:
    ZOO_MY_ID: 2
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
zoo3:
  image: zookeeper:3.4
  container_name: zoo3
  restart: always
  hostname: zoo3
  ports:
    - 2183:2181
  environment:
    ZOO_MY_ID: 3
    ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  networks:
    - solr
and then my Python code is:
import pysolr

def connect_solrcloud():
    zookeeper = pysolr.ZooKeeper("zoo1:2181,zoo2:2181,zoo3:2181")
    solr = pysolr.SolrCloud(zookeeper, "users")
    solr.ping()

connect_solrcloud()
I would expect the ZooKeeper object to be able to connect, and then I would be able to access the "users" core I created in my Docker container. Instead I get errors saying:
WARNING:: Cannot resolve zoo1: [Errno -2] Name or service not known
WARNING:: Cannot resolve zoo2: [Errno -2] Name or service not known
WARNING:: Cannot resolve zoo3: [Errno -2] Name or service not known
I don't know if this is a docker-compose issue or the way I set ZooKeeper up. It appears no one else online has this particular problem: they either have trouble standing ZooKeeper up, or hit some issue once it's connected.
Found my issue. My web container did not include
networks:
  - solr
so it wasn't able to access ZooKeeper.
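If you hit the same warnings, a quick way to confirm network membership is to resolve the hostnames from inside the client container. A small sketch (run it in the container that uses pysolr):
import socket

# Each name should resolve to a container IP once this container is on
# the same compose network; otherwise socket.gaierror is raised, which
# is exactly what pysolr's warning is reporting.
for host in ("zoo1", "zoo2", "zoo3"):
    print(host, socket.gethostbyname(host))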

python jupyter notebook in docker container connection to selenium/standalone-chrome

I use the following docker-compose.yml to run a Jupyter notebook based on jupyter/datascience-notebook:87210526f381 together with selenium/node-chrome:
version: '3'
services:
  selenium-hub:
    image: selenium/hub:3.141.59-dubnium
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.141.59-dubnium
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
    networks:
      - backend
  nbdatascience:
    container_name: nbdatascience
    image: aabor/nbdatascience
    build: nbdatascience/.
    volumes:
      - /home/$USER/py:/home/jovyan/work/py
      - /home/$USER/.jupyter:/home/jovyan/.jupyter
    ports:
      - "10000:8888"
    environment:
      - TZ="Europe/Kiev"
    restart: always
    networks:
      - backend
    depends_on:
      - chrome
networks:
  backend:
    driver: bridge
When all these containers are up, the Selenium hub is accessible at http://localhost:4444/ and JupyterLab at http://localhost:10000/lab.
I am trying to open a web browser session from the notebook by executing the following Python script:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

cap = DesiredCapabilities.CHROME
driver = webdriver.Remote(command_executor='localhost:4444', desired_capabilities=cap)
which gives me the error message:
HTTPConnectionPool(host='localhost', port=4444): Max retries exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4c17137278>: Failed to establish a new connection: [Errno 111] Connection refused',))
Correction: running this Python script resolves the problem; the driver is created and it is possible to navigate the Internet in headless mode:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

cap = DesiredCapabilities.CHROME
driver = webdriver.Remote(command_executor='http://selenium-hub:4444/wd/hub', desired_capabilities=cap)
How do I open a connection to Selenium Chrome running in another Docker container?
The documentation in SeleniumHQ/docker-selenium lacks these details.
The documentation on Docker networking says that "Once connected, the containers can communicate using only another container's IP address or name", so is it possible to call another container by name in a Python script, for example driver = webdriver.Remote(command_executor='chrome', desired_capabilities=cap)? I tried this command, but it gives me the same "connection refused" error.
Connect your Selenium container to the same backend network and use selenium-hub:4444 as the hostname instead of localhost:4444.
By the way, why do you declare the network? One is created by docker-compose by default.
Also, there is no need to explicitly declare container_name; containers get the name of their service by default.
I suggest the following changes:
docker-compose.yml
version: '3'
services:
  selenium-hub:
    image: selenium/hub:3.141.59-dubnium
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.141.59-dubnium
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  nbdatascience:
    image: aabor/nbdatascience
    build: nbdatascience/.
    volumes:
      - /home/$USER/py:/home/jovyan/work/py
      - /home/$USER/.jupyter:/home/jovyan/.jupyter
    ports:
      - "10000:8888"
    environment:
      - TZ="Europe/Kiev"
    restart: always
    depends_on:
      - chrome
Also, if you don't connect to the containers from outside, remove the port mappings.
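As a quick check from the notebook container, here is a sketch using the hub's service name (it matches the Selenium 3.141.59 API used above; the visited URL is just an example):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Inside the default compose network, the hub's service name resolves
# via Docker's embedded DNS.
driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.CHROME,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()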
