Issue communicating between two pods deployed on two VMs - python

I have deployed two pods on two different virtual machines (master and node).
Server-side Dockerfile
EXPOSE 8080
CMD ["python", "model.py"]
Server-side Python code
from socket import socket

sock = socket()
sock.bind(('', 8080))
Client-side Python code
from socket import socket

sock = socket()
sock.connect(('192.168.56.105', 8080))
Pod deployment file (server)
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
Pod deployment file (client)
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    app: client
spec:
  containers:
  - name: tensor
    image: tensor:latest
Exposing node port
kubectl expose pod server --type=NodePort
kubectl expose pod client --port 27017 --type=NodePort
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client NodePort 10.105.180.221 <none> 27017:31161/TCP 11s
server NodePort 10.106.22.209 <none> 8080:32284/TCP 35s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28h
I then run the command below for the server and the client; the server says it is waiting for the client to connect, but the connection is never established.
kubectl exec -it server -- /bin/sh
But the moment I run the command below, it does connect, although I then receive a "connection reset by peer" error.
curl -v localhost:32284
* Rebuilt URL to: localhost:32284/
* Connected to localhost (127.0.0.1) port 32284 (#0)
> GET / HTTP/1.1
> Host: localhost:32284
> User-Agent: curl/7.58.0
> Accept: */*
predictions_result.npy
141295744
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 13880)
* stopped the pause stream!
* Closing connection 0
Error
Traceback (most recent call last):
  File "model.py", line 51, in <module>
    client.sendall(data)
ConnectionResetError: [Errno 104] Connection reset by peer
Thank you very much for helping me with this; help is highly appreciated. I am stuck: I have tried port forwarding, but it's not working.

Related

How to connect to an application running on localhost from a Docker container

I am trying to connect to an application (which is not running in Docker) from a Docker container.
I am running the Docker image with docker compose, using host network mode and connecting to external services on host.docker.internal on port 7497.
I am calling it from the Python code inside the Docker container.
This container has no port config:
services:
  ibkr-bot-eminisp500:
    container_name: ibkr-bot-eminisp500
    image: |my-image|
    network_mode: host
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - IBKR_CLIENT_URL_KEY= "host.docker.internal"
      - IBKR_PORT_KEY=7497
But I am getting the following error. What am I missing?
| API connection failed: gaierror(-2, 'Name or service not known')
ibkr-bot-eminisp500 | Traceback (most recent call last):
ibkr-bot-eminisp500 | File "/usr/bin/src/app/main.py", line 8, in <module>
ibkr-bot-eminisp500 | ibkrBot = IBKRBot()
Combining host.docker.internal with network_mode: host doesn't make any sense.
If you're running under Linux, then with network_mode: host your container runs in your host's main network environment. Drop the extra_hosts section from your config because it isn't doing you any good. You can connect to a service on your host using any IP address from any host interface, including 127.0.0.1.
If you are running on anything other than Linux, then network_mode: host is probably never useful (because the Docker "host" is actually a virtual machine running on top of your primary operating system). In this case, drop network_mode: host from your config, and connect using host.docker.internal.
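A minimal compose sketch of the two mutually exclusive setups described above (service and image names taken from the question; which option applies depends on the host OS, which the question doesn't state):

```yaml
# Option A -- Linux with host networking: drop extra_hosts, target 127.0.0.1.
services:
  ibkr-bot-eminisp500:
    image: my-image
    network_mode: host
    environment:
      - IBKR_CLIENT_URL_KEY=127.0.0.1
      - IBKR_PORT_KEY=7497

# Option B -- macOS/Windows: drop network_mode, keep the gateway alias.
# services:
#   ibkr-bot-eminisp500:
#     image: my-image
#     extra_hosts:
#       - "host.docker.internal:host-gateway"
#     environment:
#       - IBKR_CLIENT_URL_KEY=host.docker.internal
#       - IBKR_PORT_KEY=7497
```

Either way, note that in the original file the entry `IBKR_CLIENT_URL_KEY= "host.docker.internal"` has a stray space and literal quotes after the `=`; in list-style environment entries those become part of the variable's value, which by itself can produce a "Name or service not known" error.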

PostgreSQL connection refused on docker container in same server

I have a PostgreSQL database running in Docker on a server. When I spin up another container for a Django app and try to connect to Postgres, I get a connection error. Any idea?
django.db.utils.OperationalError: connection to server at "localhost" (127.0.0.1), port 6545 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 6545 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?
DB docker-compose file
container_name: pg-docker
ports:
  - "6545:5432"
volumes:
  - ./data:/var/lib/postgresql/data
networks:
  - default
Django docker-compose file
version: "3.9"
services:
  django_api:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: api-dev
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    networks:
      - default
As @JustLudo said in the comments, you have to address Postgres with the container name "pg-docker". localhost would be your Django container.
In general, if you use multiple Docker containers you should not use localhost. Instead, treat every container as a standalone server and address it via DNS / container_name.
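As a sketch of the fix, the Django settings would then point at the container name and the in-network port 5432 rather than the published host port 6545 (database name and credentials below are hypothetical placeholders):

```python
# Hypothetical Django DATABASES fragment: address Postgres by container name
# on the shared Docker network, using the container port, not the host mapping.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",        # placeholder
        "USER": "myuser",      # placeholder
        "PASSWORD": "secret",  # placeholder
        "HOST": "pg-docker",   # container name, not localhost
        "PORT": "5432",        # in-network port, not the 6545 host mapping
    }
}
```

The 6545 port only matters when connecting from the host itself; container-to-container traffic on the shared network goes straight to 5432.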

docker-compose and connection to Mongo container

I am trying to create 2 containers as per the following docker-compose.yml file. The issue is that if I start up the mongo database container and then run my code locally (hitting 127.0.0.1), everything is fine, but if I run my API container and hit that (see yml file), I get connection refused, i.e.
172.29.0.12:27117: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id:
60437a460a3e0fa904650e35, topology_type: Single, servers: [<ServerDescription ('172.29.0.12', 27117) server_type:
Unknown, rtt: None, error=AutoReconnect('172.29.0.12:27117: [Errno 111] Connection refused')>]>
Please note: I have set mongo to use port 27117 rather than 27017
My app is a Python Flask app and I am using PyMongo in the following manner:
try:
    myclient = pymongo.MongoClient('mongodb://%s:%s@%s:%s/%s' % (username, password, hostName, port, database))
    mydb = myclient[database]
    cursor = mydb["temperatures"]
    app.logger.info('Database connected to: ' + database)
except:
    app.logger.error('Error connecting to database')
What's driving me mad is it runs locally and successfully accesses mongo via the container, but as soon as I try the app in a container it fails.
docker-compose.yml as follows:
version: '3.7'
services:
  hotbin-db:
    image: mongo
    container_name: hotbin-db
    restart: always
    ports:
      # <port exposed on host> : <MongoDB port running inside container>
      - '27117:27017'
    expose:
      # Opens port 27117 on the container
      - '27117'
    command: [--auth]
    environment:
      MONGO_INITDB_ROOT_USERNAME: ***
      MONGO_INITDB_ROOT_PASSWORD: ***
      MONGO_INITDB_DATABASE: ***
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
    # Where our data will be persisted
    volumes:
      - /home/simon/mongodb/database/hotbin-db/:/data/db
      #- my-db:/var/lib/mysql
    # env_file:
    #   - .env
    networks:
      hotbin-net:
        ipv4_address: 172.29.0.12
  hotbin-api:
    image: scsherlock/compost-api:latest
    container_name: hotbin-api
    environment:
      MONGODB_DATABASE: ***
      MONGODB_USERNAME: ***
      MONGODB_PASSWORD: ***
      MONGODB_HOSTNAME: 172.29.0.12
      MONGODB_PORT: '27117'
    depends_on:
      - hotbin-db
    restart: always
    ports:
      # <port exposed on host> : <port running inside container>
      - '5050:5050'
    expose:
      - '5050'
    networks:
      hotbin-net:
        ipv4_address: 172.29.0.13
# Names our volume
volumes:
  my-db:
networks:
  hotbin-net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.29.0.0/16
Using the service name of the mongo container and the standard port of
27017 instead of 27117 (even though that's what is defined in the
docker-compose file) works. I'd like to understand why though
Your docker compose file does NOT configure MongoDB to run on port 27117. If you want to get it to run on 27117 you would have to change this line in the docker compose:
command: mongod --auth --port 27117
As you haven't specified a port, MongoDB will run on the default port 27017.
Your expose section exposes the container port 27117 to the host, but Mongo isn't running on that port, so that line is effectively doing nothing.
Your ports section maps a host port 27117 to a container port 27017. This means if you're connecting from the host, you can connect on port 27117, but that is connecting to port 27017 on the container.
Now to your Python program. As it runs in the container network, you connect to services within a docker-compose network by referencing them by their service name.
Putting this together, your connection string will be: mongodb://hotbin-db:27017/yourdb?<options>
As others have mentioned, you really don't need to assign specific IP addresses unless you have a very good reason to. You also don't even need to define a network, as docker-compose creates its own internal network.
Reference: https://docs.docker.com/compose/networking/
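Putting the points above together in the asker's PyMongo style, the connection string would use the service name and the in-container port (credentials and database name below are placeholders):

```python
# Sketch: build the Mongo URI with the compose service name "hotbin-db"
# and the in-container port 27017 (not the 27117 host mapping).
username, password, database = "user", "pass", "compost"  # placeholders
uri = "mongodb://%s:%s@%s:%s/%s" % (username, password, "hotbin-db", 27017, database)
print(uri)  # mongodb://user:pass@hotbin-db:27017/compost
```

With `command: mongod --auth --port 27117` in the compose file instead, the same string would use 27117; either way the host and port must describe where Mongo actually listens inside the network, not the host mapping.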
Are you using Windows to run the container?
If yes, localhost is identified as the localhost of the container and not the localhost of your host machine.
Hence, instead of providing the IP address of your host, try modifying your MongoDB string this way when running inside the Docker container:
Try this:
mongodb://host.docker.internal:27017/
instead of:
mongodb://localhost:27017/

docker-compose: redis connection refused between containers

I am trying to setup a docker-compose file that is intended to replace a single Docker container solution that runs several processes (RQ worker, RQ dashboard and a Flask application) with Supervisor.
The host system is a Debian 8 Linux and my docker-compose.yml looks like this (I deleted all other entries to reduce error sources):
version: '2'
services:
  redis:
    image: redis:latest
  rq-worker1:
    build: .
    command: /usr/local/bin/rqworker boo-uploads
    depends_on:
      - redis
"rq-worker1" is a Python RQ worker, trying to connect to redis via localhost and port 6379, but it fails to establish a connection:
redis_1 | 1:M 23 Dec 13:06:26.285 * The server is now ready to accept connections on port 6379
rq-worker1_1 | [2016-12-23 13:06] DEBUG: worker: Registering birth of worker d5cb16062fc0.1
rq-worker1_1 | Error 111 connecting to localhost:6379. Connection refused.
galileoqueue_rq-worker1_1 exited with code 1
The output of docker ps looks like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36cac91670d2 redis:latest "docker-entrypoint.sh" 14 minutes ago Up About a minute 6379/tcp galileoqueue_redis_1
I tried everything from running the RQ worker against the local IPs 0.0.0.0 / 127.0.0.1 to localhost. Other solutions posted on Stack Overflow didn't work for me either (e.g. docker-compose: connection refused between containers, but service accessible from host).
And this is my docker info output:
Containers: 25
Running: 1
Paused: 0
Stopped: 24
Images: 485
Server Version: 1.12.5
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 436
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 13.61 GiB
Name: gp-pc-201
ID: LBGV:K26G:UXXI:BWRH:OYVE:OQTA:N7LQ:I4DV:BTNH:FZEW:7XDD:WOCU
Does anyone have an idea why the connect between the two containers doesn't work?
In your code, localhost from rq-worker1 is rq-worker1 itself, not redis, so you can't reach redis:6379 by connecting to localhost from rq-worker1. But by default, redis and rq-worker1 are in the same network, and you can use a service name as a domain name in that network.
This means you can connect to the redis service from rq-worker1 using redis as the domain name, for instance: client.connect(("redis", 6379))
You should replace localhost with redis in the config of rq-worker1.
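A minimal sketch of the fix, assuming the worker builds its Redis connection from a URL (the helper name is hypothetical):

```python
# Sketch: inside the compose network, the service name "redis" resolves via
# Docker's embedded DNS, so the worker should target redis:6379, not localhost.
def redis_url(service="redis", port=6379):
    return f"redis://{service}:{port}"

print(redis_url())  # redis://redis:6379
```

With RQ specifically, the URL can typically be passed on the worker's command line, e.g. `rqworker --url redis://redis:6379 boo-uploads`.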

access docker container in kubernetes

I have a docker container with an application exposing port 8080.
I can run it and access it on my local computer:
$ docker run -p 33333:8080 foo
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
I can test it with:
$ nc -v localhost 33333
connection succeeded!
However when I deploy it in Kubernetes it doesn't work.
Here is the manifest file:
apiVersion: v1
kind: Pod
metadata:
  name: foo-pod
  namespace: foo
  labels:
    name: foo-pod
spec:
  containers:
  - name: foo
    image: bar/foo:latest
    ports:
    - containerPort: 8080
and
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 33333
  selector:
    name: foo-pod
Deployed with:
$ kubectl apply -f foo.yaml
$ nc -v <publicIP> 33333
Connection refused
I don't understand why I cannot access it...
The problem was that the application was listening on IP 127.0.0.1.
It needs to listen on 0.0.0.0 to work in Kubernetes. A change in the application code did the trick.
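A quick stdlib-only sketch of the difference: a socket bound to 0.0.0.0 accepts traffic arriving on any interface, which is what the forwarded NodePort traffic needs, while one bound to 127.0.0.1 only accepts connections originating inside the pod itself.

```python
import socket

# Bind to all interfaces; port 0 lets the OS pick a free port for the demo.
sock = socket.socket()
sock.bind(("0.0.0.0", 0))
addr, port = sock.getsockname()
print(addr)  # 0.0.0.0 -- reachable from outside, unlike 127.0.0.1
sock.close()
```

For a Flask app like the one in the question, this usually means starting it with app.run(host="0.0.0.0", port=8080) instead of the default 127.0.0.1.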