EDIT
I think what is going on is that localhost inside the Docker container refers to the container's own localhost, not my system's localhost. So how do I ensure that when the application running in the container tries to connect to localhost:9200, it actually connects to my system's localhost:9200?
When I visit localhost:9200, my ES application seems to be running. It looks like this in chrome:
{
"name" : "H1YDvcg",
"cluster_name" : "elasticsearch_jwan",
"cluster_uuid" : "aAorzRYTQPOI0j_OgMGKpA",
"version" : {
"number" : "6.8.1",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "1fad4e1",
"build_date" : "2019-06-18T13:16:52.517138Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
I am running ES in a terminal window and it works after I run the command elasticsearch.
I am running a docker container with this command:
docker run -e DATALOADER_QUEUE='<some aws SQS queue name>' \
-e ES_HOST='localhost'\
-e ES_PORT='9200'\
-e AWS_ACCESS_KEY_ID='<somekey>'\
-e AWS_SECRET_ACCESS_KEY='<somekey>'\
-e AWS_DEFAULT_REGION='us-west-2'\
<application name>
and I get this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=9200): Max retries exceeded with url: /person/_search (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f36e4189c90>
Anyone know what is going on? I don't understand why it cannot connect to ES even though it seems to be running on localhost:9200.
The solution was to use host.docker.internal in the ES host setup.
I just used es_client = Elasticsearch(host=es_host), where es_host = "host.docker.internal", and made sure to use http instead of https while running locally.
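For reference, a minimal sketch of that client setup, assuming the elasticsearch-py client and the ES_HOST/ES_PORT environment variables already passed in the docker run command above:

import os
from elasticsearch import Elasticsearch

# Assumption: elasticsearch-py with a 6.x-style hosts list, matching the
# ES 6.8.1 instance shown above; plain http, no authentication.
es_host = os.environ.get("ES_HOST", "host.docker.internal")
es_port = int(os.environ.get("ES_PORT", "9200"))

es_client = Elasticsearch([{"host": es_host, "port": es_port}])
print(es_client.info())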
Elasticsearch is running on your host at port 9200, and the application that wants to access Elasticsearch is running inside a container.
By default the Docker container runs in bridge networking mode, in which the host and container networks are different. Hence localhost inside the container is not the same as localhost on the host.
Here you can do two things:
In your application code, try to access Elasticsearch using <private/public host IP>:9200
OR
Run the docker container in host networking mode, so that localhost inside the container is the same as on the host, because in this mode the container uses the host's network.
docker run -e DATALOADER_QUEUE='<some aws SQS queue name>' \
-e ES_HOST='localhost'\
-e ES_PORT='9200'\
-e AWS_ACCESS_KEY_ID='<somekey>'\
-e AWS_SECRET_ACCESS_KEY='<somekey>'\
-e AWS_DEFAULT_REGION='us-west-2'\
--net=host \
<application name>
NOTE: the --net=host option tells the docker container to use host networking mode.
You can bind a host port to a container port using the -p option of the docker command. So in your docker command below, you can add one more flag to bind host port 9200 to container port 9200. Notice I have added -p 9200:9200; the same is explained nicely in point 3 of this doc.
docker run -p 9200:9200 -e DATALOADER_QUEUE='<some aws SQS queue name>' \
-e ES_HOST='localhost'\
-e ES_PORT='9200'\
-e AWS_ACCESS_KEY_ID='<somekey>'\
-e AWS_SECRET_ACCESS_KEY='<somekey>'\
-e AWS_DEFAULT_REGION='us-west-2'\
<application name>
Related
I have a web project which I have hosted on the server. The frontend is Angular, the backend is Flask, the database is MongoDB, and all are made as docker containers linked to each other.
I am doing the following steps:
1. Backend Hosting
The backend is created as a docker container at 10.31.61.52.
Use "docker build . -t backend" to create the docker image. And
after it is built, use "docker run -itd -p 5000:5000 --link mongodb:mongodb --name skybridge_backend backend" to run this container. Your backend is good to go!
2. Frontend Hosting
The frontend is created as a docker container at 10.31.61.52.
Use "docker build . -t frontend" to create the docker image. And
after it is built, use "docker run -itd -p 80:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend" to run this container.
Instead of 80, I want to use 8080 as the host port so that my URL would be as below.
I then just want to access the URL like this: http://10.31.61.52:8080/login
You specify the port redirection from the host machine to the container using docker run -p <host port>:<container port>. Therefore, changing the port from 80 to 8080 in the command that runs the frontend container, like this, should do the trick:
docker run -itd -p 8080:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend
When I run this command: docker run -itd -p 80:9000 --link skybridge_backend:skybridge_backend --name skybridge_frontend frontend
the linking happens, but internally it refers to port 80 in the URL http://10.31.61.52/login and the page displays.
But I want to change it to port 8080 so that my URL is http://10.31.61.52:8080/login.
When I try to access that URL, the page does not open.
http://10.31.61.52/login -- login screen comes up
http://10.31.61.52:8080/login -- login screen does not come up
I have a streamlit application (localhost:8501) and an API (127.0.0.1:8000).
My streamlit application tries to access the API.
It works well when I launch the command to start streamlit directly. But when streamlit is in a Docker container, I'm not able to access the API. I get this error:
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /predict (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2a5ebae090>: Failed to establish a new connection: [Errno 111] Connection refused'))
This is my dockerfile:
FROM tiangolo/uvicorn-gunicorn:python3.7
ENV PYTHONPATH .
RUN mkdir /streamlit
COPY requirements.txt /streamlit
WORKDIR /streamlit
RUN pip install -r requirements.txt
COPY . /streamlit
EXPOSE 8501
CMD ["streamlit", "run", "web/source/web_api.py"]
And the commands I launch:
docker build --tag web_streamlit .
docker run --publish 8501:8501 --detach --name web_streamlit_container web_streamlit
I'm able to access localhost:8501. Inside I have a streamlit application that is unable to reach localhost:8000, which is another service.
To access a service that is running on the host, there is a special DNS name on Mac and Windows:
host.docker.internal, which resolves to the internal IP address used by the host.
So replace localhost:8000 with host.docker.internal:8000.
Or, if you are on Linux, add a host entry so that the special DNS name resolves for you:
docker run --publish 8501:8501 -it --rm --add-host="host.docker.internal:192.168.9.100" --name web_streamlit_container web_streamlit
Here 192.168.9.100 is the host IP, which you can get using ifconfig on Linux.
This way you have a DNS name that works on all platforms (Linux, Windows, and Mac), and you do not need to modify your code.
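For illustration, a minimal sketch of how the streamlit script might call the API, assuming it uses the requests library (the /predict path comes from the error above; the API_HOST environment variable is an assumption added for configurability):

import os
import requests

# Default to host.docker.internal so the containerized app reaches the API
# on the host; override API_HOST when running outside Docker.
api_host = os.environ.get("API_HOST", "host.docker.internal")
resp = requests.post(f"http://{api_host}:8000/predict", json={})  # payload depends on your API
print(resp.status_code)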
If you try to ping localhost from the container, the container will ping itself :)
If you want to reach a service that's running on the host itself, from the container, you need to fetch the IP of your docker0 interface:
On your host machine: run ip addr show docker0 to find it
neo#neo-desktop:~$ ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:33:70:c8:74 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:33ff:fe70:c874/64 scope link
valid_lft forever preferred_lft forever
Fetch the IP (172.17.0.1 in my example) and use 172.17.0.1:8000 to reach your API from the container.
PS: this works on Linux. If you run Docker on Windows or Mac, you need to do something else.
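As a sketch of the same fix (again assuming the requests library, as the traceback suggests), the only change is the host in the URL:

import requests

# 172.17.0.1 is the docker0 gateway IP found above; substitute your own output.
resp = requests.post("http://172.17.0.1:8000/predict", json={})  # payload depends on your API
print(resp.status_code)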
Technical overview of your problem & quick fix ☑️
Let's assume you have 2 docker images:
1st image: FastAPI or any other API backend, wrapped using Docker.
2nd image: Streamlit frontend, wrapped using Docker.
Now, when you request the API from the streamlit container to the other container, e.g. localhost:8000/predict, it will not work,
because you are requesting the API on the container's own localhost, which points back to itself.
So you need to change the localhost in your streamlit script to the required API host (meaning the other docker container's address).
To find that IP, run the following command:
ip addr show docker0
It will give you an IP address, something like 172.17.0.1/16.
Now change your streamlit request URL to:
http://172.17.0.1:8000/
Then build the streamlit docker image again:
docker build -t frontend:version-1 .
Now run the streamlit docker container:
docker run --rm -it -p 8501:8501/tcp frontend:version-1
Done ✅ 😃
I want to use the scrapinghub/splash container on Azure App Service (Web App for Containers) on Linux.
But the docker run command on deploy randomly changes the host-side port of the binding (see the log below: port 8961 is automatically assigned, and this number varies on every deploy):
2020-01-21 08:56:47.494 INFO - docker run -d -p 8961:8050 --name b2scraper-splash_3_d89ce1f2 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8050 -e WEBSITE_SITE_NAME=b2scraper-splash -e WEBSITE_AUTH_ENABLED=False -e PORT=8050 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=b2scraper-splash.azurewebsites.net -e WEBSITE_INSTANCE_ID=5446f93a2cbcb25300f091395c54ce738773ce47489c2818322ffabbc23e3413 scrapinghub/splash:latest python3 /app/bin/splash --proxy-profiles-path /etc/splash/proxy-profiles --js-profiles-path /etc/splash/js-profiles --filters-path /etc/splash/filters --lua-package-path "/etc/splash/lua_modules/?.lua" --disable-private-mode --port 8050
Changing the host port binding is possible using WEBSITES_PORT, but there seems to be no way to change the container side.
Is there a way to fix the container-side port binding, like -p 8050:80 or -p 8050:443, in the docker run command?
e.g. Using the container on Azure Container Instances is possible without changing the service port 8050.
--publish in the docker run command creates a firewall rule which maps a container port to a port on the Docker host.
https://docs.docker.com/config/containers/container-networking/
For the command docker run -d -p 8961:8050 imagename, TCP port 8050 in the container is mapped to port 8961 on the Docker host. On App Services, this docker run command cannot be changed. The container port (8050 in this case) can be set to a specific value using the WEBSITES_PORT application setting.
That doesn't work. You get 443 as the port with HTTPS.
Neither EXPOSE XXXX, nor WEBSITES_PORT, nor PORT as configuration parameters...
You do see "docker run -d -p 8961:8050" in the logs, but it doesn't matter to Azure when it comes to exposing the app...
I'm new to docker, redis and any kind of networking, (I know python at least!). Firstly I have figured out how to get a redis docker image and run it in a docker container:
docker run --name some-redis -d redis
As I understand it, this redis instance has port 6379 available for connections from other containers.
docker network inspect bridge
"Containers": {
"2ecceba2756abf20d5396078fd9b2ecf0d60ab04ca6b8df5e1b631b6fb5e9a85": {
"Name": "some-redis",
"EndpointID": "09f0069dae3632a2456cb4d82ad5e7c9782a2b58cb7a4ee655f57b5c410c3e87",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
If I run the following command I can interact with the redis instance and generate key:value pairs:
docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
set 'a' 'abc'
>OK
get 'a'
>"abc"
quit
I have figured out how to make and run a docker container with the redis library installed that will run a python script as follows:
Here is my Dockerfile:
FROM python:3
ADD redis_test_script.py /
RUN pip install redis
CMD [ "python", "./redis_test_script.py" ]
Here is redis_test_script.py:
import redis
print("hello redis-py")
Build the docker image:
docker build -t python-redis-py .
If I run the following command the script runs in its container:
docker run -it --rm --name pyRed python-redis-py
and returns the expected:
>hello redis-py
It seems like both containers are working OK. The problem is connecting them together: I would ultimately like to use python to perform operations on the redis container. If I modify the script as follows and rebuild the image for the python container, it fails:
import redis
print("hello redis-py")
r = redis.Redis(host="localhost", port=6379, db=0)
r.set('z', 'xyz')
r.get('z')
I get several errors:
...
OSError: [Errno 99] Cannot assign requested address
...
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
.....
It looks like they're not connecting. I tried again using the bridge IP in the python script:
r = redis.Redis(host="172.17.0.0/16", port=6379, db=0)
and get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.0/16:6379. Name or service not known.
and I tried the redis sub IP:
r = redis.Redis(host="172.17.0.2/16", port=6379, db=0)
and I get this error:
redis.exceptions.ConnectionError: Error -2 connecting to 172.17.0.2/16:6379. Name or service not known.
It feels like I'm fundamentally misunderstanding something about how to get the containers to talk to each other. I've read quite a lot of documentation and tutorials, but as I say I have no networking experience and have not previously used docker, so any helpful explanations and/or solutions would be really great.
Many thanks
That's all about Docker networking. The fast solution is to use host network mode for both containers. The drawback is low isolation, but you will get it working fast:
docker run -d --network=host redis ...
docker run --network=host python-redis-py ...
Then, to connect from python to redis, just use localhost as the hostname.
The better solution is to use a Docker user-defined bridge network:
# create network
docker network create foo
docker run -d --network=foo --name my-db redis ...
docker run --network=foo python-redis-py ...
Note that in this case you cannot use localhost; instead, use my-db as the hostname. That's why I used the --name my-db parameter when starting the first container. In user-defined bridge networks, containers reach each other by their names.
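With the user-defined network above, the only change needed in the python script is the hostname. A minimal sketch, assuming the my-db container name from the commands above:

import redis

# "my-db" resolves because both containers are attached to the same
# user-defined bridge network ("foo" above).
r = redis.Redis(host="my-db", port=6379, db=0)
r.set("z", "xyz")
print(r.get("z"))  # b'xyz'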
Do:
Explicitly create a Docker network for your application, and run your containers connected to that network. (If you use Docker Compose, this happens for you automatically and you don’t need to do anything.)
docker network create foo
docker run -d --net foo --name some-redis redis
docker run -it --rm --net foo --name pyRed python-redis-py
Use containers’ --name as DNS hostnames: you connect to some-redis:6379 to reach the container. (In Docker Compose the name of the service block works too.)
Make the locations of external services configurable, most likely using an environment variable. In your Python code you can connect to
redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"),
            port=int(os.environ.get("REDIS_PORT", "6379")))
docker run --rm -it \
--name py-red \
--net foo \
-e REDIS_HOST=some-redis \
python-redis-py
Don’t:
docker inspect anything to find the container-private IP addresses. Between containers you can always use hostnames as described above. The container-private IP addresses are unreachable from other hosts, and may even be unreachable from the same host on some platforms.
Use localhost in Docker for anything, except the specific case of connecting from a browser or other process running directly on the host (not in a container) to a port you've published with docker run -p on the same host. (Inside a container, localhost generally means "this container".)
Hard-code host names in your code like this; it makes it hard to run the service in a different environment. (For databases in particular it’s not uncommon to run them outside of Docker or even in a hosted cloud service.)
Use --link, it’s outdated and unnecessary.
I'm getting a connection refused error after building my Docker image and running docker run -t imageName.
Inside the container, my python script makes web requests (an external API call) and then communicates over localhost:5000 with a logstash socket.
My dockerfile is really simple:
FROM ubuntu:14.04
RUN apt-get update -y
RUN apt-get install -y nginx git python-setuptools python-dev
RUN easy_install pip
#Install app dependencies
RUN pip install requests configparser
EXPOSE 80
EXPOSE 5000
#Add project directory
ADD . /usr/local/scripts/
#Set default working directory
WORKDIR /usr/local/scripts
CMD ["python", "logs.py"]
However, I get a [ERROR] Connection refused message when I try to run this. It's not immediately obvious to me what I'm doing wrong here - I believe I'm opening 80 and 5000 to the outside world? Is this incorrect? Thanks.
Regarding EXPOSE:
Each container you run has its own network interface. Doing EXPOSE 5000 tells Docker to map port 5000 from the container's network interface to a random port on your host machine (see it with docker ps), as long as you tell Docker to do so when you docker run with -P.
Regarding logstash.
If your logstash is installed on your host, or in another container, it means that logstash is not on the "localhost" of the container (remember that each container has its own network interface, and each one has its own localhost). So you need to point to logstash properly.
How?
Method 1:
Don't give the container its own interface, so that it has the same localhost as your machine:
docker run --net=host ...
Method 2:
If you are using docker-compose, use docker network linking, i.e.:
services:
  py_app:
    ...
    links:
      - logstash
  logstash:
    image: .../logstash..
Then point to it like this: logstash:5000 (docker will resolve that name to the internal IP corresponding to logstash).
Method 3:
If logstash listens on localhost:5000 on your host, you can point to it as 172.17.0.1:5000 from inside your container (172.17.0.1 is the host's fixed IP on the default bridge, but this option is less elegant, arguably).
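To make that concrete, a hypothetical sketch of how logs.py might open the logstash socket with a configurable host, so the same script works with any of the three methods (the LOGSTASH_HOST environment variable is an assumption, not from the original question):

import os
import socket

# Default to localhost for --net=host; set LOGSTASH_HOST=logstash for the
# docker-compose link, or LOGSTASH_HOST=172.17.0.1 for the bridge-IP method.
logstash_host = os.environ.get("LOGSTASH_HOST", "localhost")
logstash_port = int(os.environ.get("LOGSTASH_PORT", "5000"))

sock = socket.create_connection((logstash_host, logstash_port), timeout=5)
sock.sendall(b'{"message": "hello from logs.py"}\n')
sock.close()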