How can I connect between containers? - python

I want my client socket to connect to the server socket via localhost:8080, but I keep getting this error message:
ConnectionRefusedError: [Errno 111] Connection refused
This is my server socket
import socket
import threading
serverSocket = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
serverPort = 8080
serverSocket.bind(('', serverPort))
serverSocket.listen(1)
connectionSocket, address = serverSocket.accept()
print('*****',str(address), 'has entered.*****')
and Dockerfile
FROM python:3.7.4-alpine3.10
WORKDIR /app
COPY . /app
CMD ["python", "serversocket.py"]
EXPOSE 8080
This is client socket
import socket
import threading
clientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverPort = 8080
clientSocket.connect(('localhost', serverPort))
print('*****You Enter the server.*****')
and Dockerfile
FROM python:3.7.4-alpine3.10
WORKDIR /app
COPY . /app
CMD ["python", "clientsocket.py"]
and finally is my docker-compose.yml file
version: "3"
services:
  server:
    image: server
    ports:
      - "8080:8080"
  client:
    image: client
How can I solve this problem?

Your server and your client run in a Docker network. Publishing the server port 8080 only makes it reachable from your local machine.
From the client container's point of view, localhost is not your local machine; it is the container's own loopback interface. So trying to connect to localhost:8080 from the client container won't work.
What's nice with docker-compose is that it implicitly creates a common Docker network and attaches every defined container to it. Thus, from inside a container, you can reach the other containers by their service name.
You can try it if you run a shell session from inside the container:
docker-compose exec client sh # Opens a prompt INSIDE the client container
ping server # Ping the hostname server. It will respond.
So, let's go back to your issue. You should do this in your client code:
# The hostname is the name of the container, "server"
clientSocket.connect(('server', serverPort))
As a side note, it would be better if the server hostname and the port are defined as environment variables. It will allow you to easily change those values (for example, in the docker-compose file) without having to rebuild the image.
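For instance, a minimal sketch of the client reading those values from the environment (the variable names SERVER_HOST and SERVER_PORT are illustrative, not anything Docker sets for you):

```python
import os
import socket

def connect_to_server():
    # SERVER_HOST / SERVER_PORT are hypothetical names; you would set them
    # under `environment:` in docker-compose. The defaults match this setup.
    host = os.environ.get('SERVER_HOST', 'server')
    port = int(os.environ.get('SERVER_PORT', '8080'))
    clientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    clientSocket.connect((host, port))
    return clientSocket
```

In the compose file you could then set, for example, SERVER_HOST: server under the client service's environment: key and change it without rebuilding the image.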

Related

Connect to a python socket inside a docker-compose mediated docker from host

I am trying to create a python socket inside a docker and forward that port to its host machine, to where some other programs should be trying to connect.
For this, the dockerised running python is doing as follows:
# Python 3.8, inside docker
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(('0.0.0.0', 5000))
while True:
    message, addrs = s.recvfrom(1480)
    _log(f"DATA RECEIVED from {addrs}")  # arbitrary logging function
Since other programs can't know in advance which IP range will be assigned to the Docker network, I have to rely on Docker's port publishing. What I would like to achieve is that, when the host receives a connection on its port 5000, it is picked up by the dockerised Python process.
Both the host and the dockerised image are Ubuntu. The IP of the host on the local network is 192.168.1.141, the Docker subnet is 172.18.0.0, and the docker-compose file for the container looks as follows:
docker-compose file
version: "3"
services:
  my_docker:
    container_name: python_socket
    image: registry.gitlab.com/my_group/my_project:latest
    volumes:
      - ./configs:/configs
      - ./logs:/configs/logs
      - ./sniffing:/app/sniffing
    ports:
      - "5000:5000"
      - "3788:3788"
      - "3787:80"
    networks:
      - app_network
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - rabbitmq
      - backend
    restart: always
networks:
  app_network:
With this configuration, I am not able to connect. I've seen that, on the host, if I launch an IPython session and try the following:
# Python 3.8, ipython console
data = "Hello"
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(('172.18.0.2', 5000))
s.send(data.encode())
Then this info reaches the container successfully. But if I try to connect to 0.0.0.0:5000, localhost:5000, or 192.168.1.141:5000, I only get connection errors. The container isn't getting the information from outside the machine either.
What am I missing? I have web servers configured similarly where the Docker port forwarding is successful.
Thanks in advance for your time and help.
You're creating a UDP socket (SOCK_DGRAM). The Compose ports: documentation is somewhat quiet on this, but the general Container networking overview implies that port forwarding defaults to TCP only.
You can explicitly specify you need to forward a UDP port:
ports:
  - '5000:5000/udp'
Then you can connect to the host's DNS or IP address (or localhost or 127.0.0.1, if you're calling from outside a container but on the same host) and the first port number 5000. Do not look up the container-private IP address; it's unnecessary and doesn't work in most contexts.
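From the host, you can then check the UDP mapping with a small sender sketch (host and port here are placeholders for your setup):

```python
import socket

def send_datagram(host, port, payload):
    # UDP is connectionless: sendto() fires a single datagram at host:port,
    # with no handshake and no delivery guarantee.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return s.sendto(payload.encode(), (host, port))
    finally:
        s.close()

# e.g. send_datagram('localhost', 5000, 'Hello')
```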

Connect to an OPC UA server running inside a container

I have dockerized a simple OPC UA server. When I run it locally, I can connect to the server without problems. However, when I run the server in a Docker container, the client refuses to connect. Further, when I try to set the server endpoint to opc.tcp://localhost:4840, the server will not bind to the address when run inside a container; the endpoint opc.tcp://127.0.0.1:4840 must be used. This is not an issue when running the server locally. The server is implemented with https://github.com/FreeOpcUa/python-opcua and the client used is https://github.com/FreeOpcUa/opcua-client-gui.
I have tried to set different endpoints without any luck.
The server implementation is:
from opcua import Server, ua
server = Server()
server.set_endpoint('opc.tcp://127.0.0.1:4840')
server.set_security_policy([ua.SecurityPolicyType.NoSecurity])
server.start()
try:
    while True:
        i = 1
finally:
    server.stop()
The Dockerfile exposes port 4840 (EXPOSE 4840). The Docker run command is
docker run --rm --name server -p 4840:4840 opcua
Your server in the container is only listening on 127.0.0.1, hence only accepting connections from inside the container:
server.set_endpoint('opc.tcp://127.0.0.1:4840')
You should listen on all interfaces instead:
server.set_endpoint('opc.tcp://0.0.0.0:4840')
You need to use --network host in your docker run command, since localhost inside the container is not your host.

Connect a Thrift client to a Thrift server in separate docker containers on the same host

I'm trying to connect a Thrift client (client) to a Thrift server (server) on the same host; the server and client must be in separate docker containers.
I'm using the python implementation of Apache Thrift, Thriftpy v0.3.9. The host is Ubuntu 16.04, Docker is version 18.06.0-ce, and docker-compose is version 1.17.0. I'm using a python:3.6.6-alpine3.8 image.
I can successfully connect the client to the server on the same host so long as they're not containerized. However, I need them in containers.
My docker-compose.yml:
version: "3"
services:
  thrift_client:
    build: .
    ports:
      - "6002:6000"
    links:
      - thrift_server
  thrift_server:
    image: thrift_server
    ports:
      - "6001:6000"
The server runs successfully. However, the client raises the following exception:
"Could not connect to %s" % str(addr))
thriftpy.transport.TTransportException: TTransportException(type=1, message="Could not connect to ('thrift_server', 6000)")
I'm following this little demo linked below with only slight deviations so as to do it with docker. https://thriftpy.readthedocs.io/en/latest/
My pinpong.thrift file:
service PingPong {
    string ping(),
}
thrift_server.py:
import thriftpy
pingpong_thrift = thriftpy.load("pingpong.thrift", module_name="pingpong_thrift")
from thriftpy.rpc import make_server

class Dispatcher(object):
    def ping(self):
        return "pong"

server = make_server(pingpong_thrift.PingPong, Dispatcher(), 'localhost', 6000)
server.serve()
thrift_client.py:
import thriftpy
pingpong_thrift = thriftpy.load("pingpong.thrift", module_name="pingpong_thrift")
from thriftpy.rpc import make_client
client = make_client(pingpong_thrift.PingPong, 'thrift_server', 6000)
client.ping()
Again, this works fine without using Docker on the host. Of course, I use 'localhost' in lieu of 'thrift_server' for the client when doing it without Docker.
The goal is to experiment making calls from the thrift client to the thrift server on the same host. Therefore, docker-compose is not necessary for this simple learning example.
First of all, the ports used in the question above are wrong: with this approach, the thrift client must connect to the same port on the host that the thrift server listens on.
Therefore, I replaced this line in the thrift client
client = make_client(pingpong_thrift.PingPong, 'thrift_server', 6000)
with the following:
client = make_client(pingpong_thrift.PingPong, 'localhost', 6000)
Docker's bridge network doesn't allow two or more containers to publish the same host port. Therefore, I use the host network instead. To my understanding, you can use the host network on Docker for Linux (not sure about Windows) but not on Mac.
In any case, I simply did the following:
$ docker run -it --network=host --name thrift_server_container -p=0.0.0.0:6000:6000 thrift_server python thrift_server.py
Then in another terminal:
$ docker run -it --network=host --name thrift_client_container -p=0.0.0.0:6000:6000 thrift_client python
The second command will put you in the python repl of the client. You can then start an instance of the thrift client in the repl and ping the thrift server. Of course you could run thrift_client.py in the second command instead, but I found it easier to experiment with thrift in the repl.
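As an aside, a sketch of an alternative that keeps the default bridge network instead of host networking, assuming the server is changed to bind to 0.0.0.0 rather than localhost: the client can then reach the service name thrift_server directly over the Compose network, and no host port publishing is needed for container-to-container traffic.

```yaml
version: "3"
services:
  thrift_server:
    image: thrift_server
    # The server must bind 0.0.0.0:6000 inside the container, not localhost.
  thrift_client:
    build: .
    depends_on:
      - thrift_server
    # The client connects to ('thrift_server', 6000) over the Compose network.
```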

Broken pipe at client on TCP connection to server inside docker on Mac OS using Python

I have a client.py sending data to (server_ip, 60000). The server-side code, which receives the data, sits inside a Docker container. The code is in Python and the server runs on macOS. Before migrating to Docker, I could successfully transmit data. After dockerizing the server.py code, the bind happens, but client.py at connection.sendall(out) says:
socket.error: [Errno 32] Broken pipe
Here is my docker-compose.yml:
version: '2'
services:
  server:
    build: ./server
    ports:
      - server_IP:60000:60000
and here is the binding inside server.py:
port = 60000
host = "localhost"
Any idea why this happens?
Well, I could fix it by setting the host on the server side to 0.0.0.0 inside Docker and removing and rebuilding the image. All works fine.
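For reference, a minimal sketch of the corrected binding with plain stdlib sockets (the helper name and the port parameter are illustrative; the dockerised server would use port 60000):

```python
import socket

def make_listener(host='0.0.0.0', port=60000):
    # 0.0.0.0 accepts connections arriving on any interface, including
    # Docker's forwarded ports; 'localhost' would only accept loopback
    # traffic originating inside the container itself.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen(1)
    return s
```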

socket.error:[errno 99] cannot assign requested address : flask and python

I am having the same problem as here and here.
I am trying to run a Flask app inside a Docker container. It works fine with '0.0.0.0' but throws an error with my IP address.
I am behind a corporate proxy. When I checked my IP address with ipconfig, it showed: 10.***.**.**. I am using Docker Toolbox, where my container IP is 172.17.0.2 and the VM IP address is 192.168.99.100.
I have a flask app running inside this docker with host
if __name__ == "__main__":
    app.run(host='0.0.0.0')
works fine. But when i change it to my ip address
if __name__ == "__main__":
app.run(host= '10.***.**')
throws error :
socket.error:[errno 99] cannot assign requested address
I checked the IP address again with a simple Flask application running locally (i.e. without Docker):
if __name__ == "__main__":
    app.run(host='10.***.**')
It worked fine.
So the problem only appears when running inside Docker, and that's because I am behind a router running NAT with an internal IP address. How do I find this internal IP address behind NAT? I have already set up port forwarding for the Flask application on port 5000.
> iptables -t nat -A DOCKER -p tcp --dport 5000 -j DNAT --to-destination 172.17.0.2:5000
> iptables -t nat -A POSTROUTING -j MASQUERADE -p tcp --source 172.17.0.2 --destination 172.17.0.2 --dport https
> iptables -A DOCKER -j ACCEPT -p tcp --destination 172.17.0.2 --dport https
To let other computers on the LAN connect to your service, use the 0.0.0.0 address in the app.run() function and publish the desired port from your Docker container to your host PC.
To publish a port you need to
1) specify the EXPOSE directive in the Dockerfile
2) run the container with the -p <port_on_host>:<port_in_container> parameter.
For example:
Dockerfile:
FROM ubuntu:17.10
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
ENTRYPOINT ["/usr/sbin/apache2ctl"]
CMD ["-D", "FOREGROUND"]
Build:
docker build -t image_name .
Run:
docker run -d -p 80:80 image_name
Check:
curl http://localhost
P.S. Make sure that port 80 is not used by another app on your host PC before running the container. If this port is already in use, specify another port, for example 8080:
docker run -d -p 8080:80 image_name
And then check:
curl http://localhost:8080
Docs are here.
The answer to OSError [Errno 99] - python also applies here.
If it works using the IP address but not the hostname, removing the duplicate localhost entry in /etc/hosts should be the solution. The hosts file should look something like this (mapping IP to hostname):
127.0.0.1 localhost
127.0.1.1 your_hostname_here
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
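To check what a hostname actually resolves to from Python, which helps spot a bad /etc/hosts entry, a sketch like this can be used (the helper name is illustrative):

```python
import socket

def resolve(hostname):
    # Return the distinct addresses the system resolver maps hostname to;
    # on most Linux setups this consults /etc/hosts first.
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})
```

With a correct hosts file, resolve('localhost') should include 127.0.0.1 (and usually ::1).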
The answer by Artsiom Praneuski is only concerned with the Docker configuration, which is relevant to the container setup, but it does not point to a Python environment fix (for both the container and the normal setup).
