I have dockerized a simple OPC UA server. When I run it locally, I can connect to the server without problems. However, when I run the server in a Docker container, the client refuses to connect. Furthermore, when I set the server's endpoint to opc.tcp://localhost:4840, the server will not bind to the address when it is run inside a container; the endpoint opc.tcp://127.0.0.1:4840 must be used instead. This is not an issue when running the server locally. The server is implemented with https://github.com/FreeOpcUa/python-opcua and the client used is https://github.com/FreeOpcUa/opcua-client-gui.
I have tried to set different endpoints without any luck.
The server implementation is:
from opcua import Server, ua

server = Server()
server.set_endpoint('opc.tcp://127.0.0.1:4840')
server.set_security_policy([ua.SecurityPolicyType.NoSecurity])
server.start()
try:
    while True:
        i = 1
finally:
    server.stop()
The Dockerfile exposes port 4840 (EXPOSE 4840). The Docker run command is:
docker run --rm --name server -p 4840:4840 opcua
Your server in the container is only listening on 127.0.0.1, and therefore only accepts connections from inside the container:
server.set_endpoint('opc.tcp://127.0.0.1:4840')
You should listen on all interfaces instead:
server.set_endpoint('opc.tcp://0.0.0.0:4840')
Alternatively, you can use --network host in your docker run command, since localhost inside the container is not your host.
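Once the server binds to 0.0.0.0 and the port is published with -p 4840:4840, a quick connectivity check from the host could look like this (a minimal sketch using the same python-opcua library as the question):

from opcua import Client

# connect from the host through the published port
client = Client("opc.tcp://localhost:4840")
client.connect()
print(client.get_root_node())  # prints the server's root node if the connection works
client.disconnect()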
Here is what I am trying to achieve:
SSH into an EC2 instance Node1.
Public IP is available for Node1.
I am using a .pem file to create the Connection with Node1.
From Node1, ssh into localhost on port 2022: ssh admin@localhost -p 2022
Now execute a command while inside localhost.
Here is the code snippet I am using:
from fabric2 import Connection

jump_host_ip = "A.B.C.D"
user = "root_user"
pem_key = "example.pem"

with Connection(jump_host_ip, user=user, connect_kwargs={"key_filename": pem_key}) as jump_host:
    with Connection('localhost', user='dummy_user', port=2022,
                    connect_kwargs={'password': 'password'}, gateway=jump_host) as local_host:
        local_host.run('ls -la')
This code is hosted on another EC2 server, and when executed from that EC2 server it throws the following exception:
paramiko.ssh_exception.AuthenticationException: Authentication failed.
But this code works when executed from a local machine (not from the EC2 server).
Is it possible EC2 could be blocking the connection to localhost through the gateway?
If yes, what should the fix be?
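One way to narrow down an AuthenticationException like this is to enable paramiko's negotiation logging and compare the traces produced on the EC2 server and on the local machine (a diagnostic sketch, not a fix; paramiko.log is an arbitrary file name):

import logging
import paramiko

# write the full SSH handshake and auth negotiation to a file,
# then run the fabric2 code from the question and inspect paramiko.log
logging.basicConfig(level=logging.DEBUG)
paramiko.util.log_to_file("paramiko.log")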
I have a streamlit application (localhost:8501) and an API (127.0.0.1:8000).
My streamlit application tries to access the API.
It works well when I launch streamlit from the command line. But when streamlit runs in a Docker container, I'm not able to access the URL, and I get this error:
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /predict (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2a5ebae090>: Failed to establish a new connection: [Errno 111] Connection refused'))
This is my Dockerfile:
FROM tiangolo/uvicorn-gunicorn:python3.7
ENV PYTHONPATH .
RUN mkdir /streamlit
COPY requirements.txt /streamlit
WORKDIR /streamlit
RUN pip install -r requirements.txt
COPY . /streamlit
EXPOSE 8501
CMD ["streamlit", "run", "web/source/web_api.py"]
And the commands I launch:
docker build --tag web_streamlit .
docker run --publish 8501:8501 --detach --name web_streamlit_container web_streamlit
I'm able to access localhost:8501. Inside, I have a streamlit application that is unable to reach localhost:8000, which is another service.
To access a service that is running on the host, there is a special DNS name on Mac and Windows:
host.docker.internal, which resolves to the internal IP address used by the host.
So replace localhost:8000 with host.docker.internal:8000.
If you are on Linux, you can make the special DNS name resolve yourself:
docker run --publish 8501:8501 -it --rm --add-host="host.docker.internal:192.168.9.100" --name web_streamlit_container web_streamlit
where 192.168.9.100 is the host IP, which you can get using ifconfig on Linux.
This way you have a DNS name that works on all platforms (Linux, Windows, and Mac), and you do not need to modify the code per platform.
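The request in the Streamlit script would then look something like this (a minimal sketch assuming the API is called with requests; the /predict path comes from the error message above, and the JSON payload is hypothetical):

import requests

API_URL = "http://host.docker.internal:8000"  # instead of http://127.0.0.1:8000
# the payload shape is a placeholder; use whatever your API expects
response = requests.post(f"{API_URL}/predict", json={"feature": 1.0})
print(response.json())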
If you try to ping localhost from the container, the container will ping itself :)
If you want to reach a service that's running on the host itself, from the container, you need to fetch the IP of your docker0 interface:
On your host machine: run ip addr show docker0 to find it
neo#neo-desktop:~$ ip addr show docker0
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:33:70:c8:74 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:33ff:fe70:c874/64 scope link
valid_lft forever preferred_lft forever
Fetch the IP (172.17.0.1 in my example) and use 172.17.0.1:8000 to reach your API from the container.
PS: works on Linux. If you run Docker on Win or Mac, you need to do something else.
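To avoid hardcoding the bridge IP in the code, one option is to pass it in as an environment variable at docker run time (a sketch; API_HOST is a hypothetical variable name):

import os
import requests

# pass the IP with: docker run -e API_HOST=172.17.0.1 ...
api_host = os.environ.get("API_HOST", "172.17.0.1")
response = requests.post(f"http://{api_host}:8000/predict", json={"feature": 1.0})  # placeholder payload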
Technical overview of your problem & quick fix ☑️
Let's assume you have 2 Docker images:
1st image: FastAPI or any other API backend, wrapped using Docker.
2nd image: Streamlit frontend, wrapped using Docker.
Now, when you request the API from the Streamlit container to another container, e.g. (localhost://predict), it will not work, because localhost inside the Streamlit container points to that container itself.
So you need to change that localhost in your Streamlit script to the host the API is actually reachable on (another container's address).
To find that IP, run the following command:
ip addr show docker0
It will give you an IP address like 172.17.0.1/16.
Now change the localhost request URL in your Streamlit script to:
http://172.17.0.1/
Then build the Streamlit Docker image again:
docker build -t frontend:version-1 .
Now run the Streamlit container:
docker run --rm -it -p 8501:8501/tcp frontend:version-1
Done ✅ 😃
I'm trying to connect a Thrift client to a Thrift server on the same host; the server and client must be in separate Docker containers.
I'm using the python implementation of Apache Thrift, Thriftpy v0.3.9. The host is Ubuntu 16.04, Docker is version 18.06.0-ce, and docker-compose is version 1.17.0. I'm using a python:3.6.6-alpine3.8 image.
I can successfully connect the client to the server on the same host so long as they're not containerized. However, I need them in containers.
My docker-compose.yml:
version: "3"
services:
thrift_client:
build: .
ports:
- "6002:6000"
links:
- thrift_server
thrift_server:
image: thrift_server
ports:
- "6001:6000"
The server runs successfully. However, the client raises the following exception:
"Could not connect to %s" % str(addr))
thriftpy.transport.TTransportException: TTransportException(type=1, message="Could not connect to ('thrift_server', 6000)")
I'm following this little demo linked below with only slight deviations so as to do it with docker. https://thriftpy.readthedocs.io/en/latest/
My pinpong.thrift file:
service PingPong {
string ping(),
}
thrift_server.py:
import thriftpy
from thriftpy.rpc import make_server

pingpong_thrift = thriftpy.load("pingpong.thrift", module_name="pingpong_thrift")

class Dispatcher(object):
    def ping(self):
        return "pong"

server = make_server(pingpong_thrift.PingPong, Dispatcher(), 'localhost', 6000)
server.serve()
thrift_client.py:
import thriftpy
pingpong_thrift = thriftpy.load("pingpong.thrift", module_name="pingpong_thrift")
from thriftpy.rpc import make_client
client = make_client(pingpong_thrift.PingPong, 'thrift_server', 6000)
client.ping()
Again, this works fine without using Docker on the host. Of course, I use 'localhost' in lieu of 'thrift_server' for the client when doing it without Docker.
The goal is to experiment making calls from the thrift client to the thrift server on the same host. Therefore, docker-compose is not necessary for this simple learning example.
First of all, the ports used in the question above are all wrong. The client must connect to the same port on the host that the server is listening on.
Therefore, I replaced this line in the thrift client:
client = make_client(pingpong_thrift.PingPong, 'thrift_server', 6000)
with the following:
client = make_client(pingpong_thrift.PingPong, 'localhost', 6000)
Docker's bridge network doesn't allow two or more containers to publish the same host port. Therefore, I use the host network. To my understanding, you can use the host network with Docker on Linux (not sure about Windows) but not on Mac.
In any case, I simply did the following:
$ docker run -it --network=host --name thrift_server_container -p=0.0.0.0:6000:6000 thrift_server python thrift_server.py
Then in another terminal:
$ docker run -it --network=host --name thrift_client_container -p=0.0.0.0:6000:6000 thrift_client python
The second command will put you in the Python REPL of the client container. You can then create an instance of the thrift client in the REPL and ping the thrift server. Of course, you could run thrift_client.py in the second command instead, but I found it easier to experiment with thrift in the REPL.
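The REPL session might look like this (a sketch assuming pingpong.thrift is in the working directory of the client container):

import thriftpy
from thriftpy.rpc import make_client

pingpong_thrift = thriftpy.load("pingpong.thrift", module_name="pingpong_thrift")
client = make_client(pingpong_thrift.PingPong, 'localhost', 6000)
print(client.ping())  # expected output: pong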
I have a client.py sending data to (server_ip, 60000). The server-side code, which receives the data, sits inside a Docker container. Both are written in Python, and the server runs on macOS. Before migrating to Docker, I could successfully transmit data. After dockerizing the server.py code, the bind happens, but client.py fails at connection.sendall(out) with:
socket.error: [Errno 32] Broken pipe
Here is my docker-compose.yml:
version: '2'
services:
  server:
    build: ./server
    ports:
      - server_IP:60000:60000
and here is the binding inside server.py:
port = 60000
host = "localhost"
Any idea why this happens?
Well, I could fix it by setting the host on the server side to 0.0.0.0 inside Docker and removing and rebuilding the image. All works fine now.
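For reference, the corrected binding could look like this (a minimal sketch assuming the standard socket API, since the question only shows the host and port variables):

import socket

host = "0.0.0.0"  # listen on all interfaces instead of "localhost"
port = 60000

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((host, port))
sock.listen(1)  # accept connections coming in through the published container port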
I have an app in python that I want to run in a docker container and it has a line:
h2o.connect(ip='127.0.0.1', port='54321')
The h2o server is running in a Docker container, and it always has a different IP. One time it was started on 172.19.0.5, another time on 172.19.0.3, and sometimes on 172.17.0.3.
So it is always random, and I can't connect the Python app to it.
I tried exposing the port of the h2o server to localhost and then connecting the Python app (the code above), but it is not working.
You don't connect two Docker containers through IP addresses. Instead, you want to use Docker's internal network aliases:
version: '3'
services:
  server:
    ...
    depends_on:
      - database
  database:
    ...
    expose:
      - "54321"
then you can define your connection in the server as:
h2o.connect(ip='database', port='54321')
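With both services on the same Compose network, the service name database resolves to the h2o container's current IP, so the address no longer changes between runs.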