access docker container in kubernetes - python

I have a docker container with an application exposing port 8080.
I can run it and access it on my local computer:
$ docker run -p 33333:8080 foo
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
I can test it with:
$ nc -v localhost 33333
connection succeeded!
However when I deploy it in Kubernetes it doesn't work.
Here is the manifest file:
apiVersion: v1
kind: Pod
metadata:
  name: foo-pod
  namespace: foo
  labels:
    name: foo-pod
spec:
  containers:
  - name: foo
    image: bar/foo:latest
    ports:
    - containerPort: 8080
and
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 33333
  selector:
    name: foo-pod
Deployed with:
$ kubectl apply -f foo.yaml
$ nc -v <publicIP> 33333
Connection refused
I don't understand why I cannot access it...

The problem was that the application was listening on IP 127.0.0.1.
It needs to listen on 0.0.0.0 to work in Kubernetes. A change in the application code did the trick.
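For instance, if the app were a Flask service, the fix is a one-line change to the bind address (a minimal sketch; the question does not show the actual application code):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # 127.0.0.1 only accepts connections from inside the container itself;
    # 0.0.0.0 listens on all interfaces, which the pod network requires
    app.run(host="0.0.0.0", port=8080)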

Related

manage.py': [Errno 2] No such file or directory

I am deploying a microservice on Azure through GitHub Actions, and the pod is in CrashLoopBackOff status.
Below is the logs command output from the Kubernetes namespace for the crashing container.
Is there something to be done with the volumes? From some searching, people seem to run into this with volume mounts.
kubectl logs --previous --tail 10 app-dev-559d688468-8lr6n
/usr/local/bin/python: can't open file '/app/3/ON_ /Scripts/manage.py': [Errno 2] No such file or directory
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-dev
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: -app
        image: "${DOCKER_REGISTRY}/app:${IMAGE_TAG}"
        readinessProbe:
          failureThreshold: 6
          httpGet:
            path: /
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 1
        imagePullPolicy: Always
        command: ["/bin/sh"]
        args:
        - -c
        - >-
          /bin/sed -i -e "s/# 'autodynatrace.wrappers.django'/'autodynatrace.wrappers.django'/" /app/T /ON_ 3/ON_ /settings.py &&
          /usr/local/bin/python manage.py collectstatic --noinput &&
          AUTOWRAPT_BOOTSTRAP=autodynatrace AUTODYNATRACE_FORKABLE=True /usr/local/bin/gunicorn --workers 8 --preload --timeout 120 --config gunicorn.conf.py --bind 0.0.0.0:8000
        env:
        - name: AUTODYNATRACE_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: AUTODYNATRACE_APPLICATION_ID
          value: Django ($(AUTODYNATRACE_POD_NAME):8000)
        ports:
        - containerPort: 8000
        volumeMounts:
        - name: secrets
          readOnly: true
          mountPath: /root/FHIREngine/conf
        - name: secrets
          readOnly: true
          mountPath: /home/ view/FHIREngine/conf
      imagePullSecrets:
      - name: docker-registry-credentials
      volumes:
      - name: secrets
        secret:
          secretName: config
          defaultMode: 420
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: dev
spec:
  type: NodePort
  ports:
  - name: app
    port: 8000
    targetPort: 8000
  selector:
    app: app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev
  annotations:
    #external-dns.alpha.kubernetes.io/hostname: .io
    #external-dns.alpha.kubernetes.io/type: external
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
  - host: com
    http:
      paths:
      - path: /
        backend:
          # serviceName: app
          # servicePort: 8000
          service:
            name: app
            port:
              number: 8000
        pathType: ImplementationSpecific
The same image is working fine on the AWS side.
kubectl describe pod
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  BackOff  13m (x681 over 157m)    kubelet  Back-off restarting failed container
  Normal   Pulling  3m27s (x36 over 158m)   kubelet  Pulling image " applicationA:latest"
Let me know if you have any ideas.
Pod CrashLoopBackOff status indicates that pod startup fails repeatedly in Kubernetes.
Possible workaround 1:
1) OOMKilled: you will also likely see Reason: OOMKilled in the container status. Check whether your application needs more resources (see the resources sketch after this list).
2) Check that your manage.py file is located where the command expects it, in the root directory of the project. Make sure the working directory is the one manage.py is in, and check for typos in the file name or path.
3) If the liveness probe failed, you can see a warning like 'manage.py': [Errno 2] No such file or directory in the events output, so adjust the timing of the liveness/readiness probes. See Define readiness probes for more information.
Please refer to this link for more detailed information.
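As a quick illustration of the resources point in item 1, a minimal sketch of a container resources block (the memory/CPU numbers are placeholder assumptions, not from the original manifest):

resources:
  requests:
    memory: "512Mi"   # the scheduler reserves at least this much for the pod
    cpu: "250m"
  limits:
    memory: "1Gi"     # exceeding this gets the container OOMKilled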
Possible workaround 2:
Based on the warning: Reason: BackOff, message: Back-off restarting failed container.
Since your manifest sets restartPolicy: Always, the usual recommendation is restartPolicy: OnFailure, which marks the pod Completed once the process/command in the container ends. Note, though, that pods managed by a Deployment only support restartPolicy: Always, so this change applies to bare Pods or Jobs.
Refer to the Kubernetes official documentation for more information.

Minikube run flask docker fail with ERR_CONNECTION_RESET

I am new to Kubernetes, and I want to run a simple Flask program in Docker on Kubernetes. The image works successfully in Docker, but when I apply the k8s.yaml with kubectl apply -f k8s.yaml and execute minikube service flask-app-service, the web request fails with ERR_CONNECTION_REFUSED, and the pod status shows Error: ErrImageNeverPull.
app.py:
# flask_app/app/app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == '__main__':
    app.debug = True
    app.run(debug=True, host='0.0.0.0')
Dockerfile:
FROM python:3.9
RUN mkdir /app
WORKDIR /app
ADD ./app /app/
RUN pip install -r requirement.txt
EXPOSE 5000
CMD ["python", "/app/app.py"]
K8s.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  selector:
    app: flask-app
  ports:
  - protocol: "TCP"
    port: 5000
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  selector:
    matchLabels:
      app: flask-app
  replicas: 3
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
      - name: flask-app
        image: flask_app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
After deploying I try to connect to http://127.0.0.1:51145 from a browser, but it fails to connect with an ERR_CONNECTION_REFUSED message. I have a screenshot showing a more detailed Chinese-language error message if that detail is helpful.
Update:
After switching imagePullPolicy from Never to Always or IfNotPresent, the pod still can't run.
The docker images command shows that the image exists, but when I pull the image with docker pull it shows me an error, and it still doesn't work after docker login.
P.S. I followed this website to practice: https://lukahuang.com/running-flask-on-minikube/
Based on the error in the question:
pods status Error: ErrImageNeverPull.
The pod doesn't start because you have imagePullPolicy: Never in your deployment manifest, which means that if the image is missing, it won't be pulled anyway.
This is from the official documentation:
The imagePullPolicy for a container and the tag of the image affect when the kubelet attempts to pull (download) the specified image.
You need to switch it to IfNotPresent or Always.
See more in image pull policy.
After everything is done correctly, the pod status should be Running, and then you can connect to the pod and get the response back. See the example output:
$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
ubuntu   1/1     Running   0          4d
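One more thing worth checking with a locally built image: minikube runs its own Docker daemon, so an image built on the host is not automatically visible inside the cluster. A common workflow (a sketch, assuming the k8s.yaml from the question):

$ eval $(minikube docker-env)         # point the docker CLI at minikube's daemon
$ docker build -t flask_app:latest .  # build where the kubelet can actually see it
$ kubectl apply -f k8s.yaml
$ kubectl get pods                    # status should now be Running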
Why are you using the same port for all the containers? I don't think that will work. You need to assign different ports to each container, or better still create different mappings like 3737:5000 and 2020:5000, so the external port can be anything while the internal port remains 5000. That should work, I think.

Issue in communicating two pods with each other deployed on two VMs

I have deployed two pods on two different virtual machines (master and node).
Dockerfile server-side
EXPOSE 8080
CMD ["python", "model.py"]
Server-side Python code
from socket import socket

sock = socket()
sock.bind(('', 8080))
Client-side Python code
from socket import socket

sock = socket()
sock.connect(('192.168.56.105', 8080))
Pod deployment file server
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
Pod deployment file client
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    app: client
spec:
  containers:
  - name: tensor
    image: tensor:latest
Exposing NodePorts
kubectl expose pod server --type=NodePort
kubectl expose pod client --port 27017 --type=NodePort
kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
client       NodePort    10.105.180.221   <none>        27017:31161/TCP   11s
server       NodePort    10.106.22.209    <none>        8080:32284/TCP    35s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP           28h
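For reference, a client pod inside the cluster would normally reach the server through the Service's cluster DNS name and service port rather than a hardcoded VM IP; a minimal sketch, assuming both pods run in the default namespace with the services listed above:

from socket import socket

sock = socket()
# 'server' is the Service name from `kubectl get services`; cluster DNS
# resolves it to the ClusterIP, and 8080 is the service port
sock.connect(('server.default.svc.cluster.local', 8080))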
I then run the command below for the server and the client; it says it is waiting for the client to connect, but the connection is never made.
kubectl exec -it server -- /bin/sh
But the moment I type the command below, it gets connected, and then I receive a connection reset by peer error.
curl -v localhost:32284
* Rebuilt URL to: localhost:32284/
* Connected to localhost (127.0.0.1) port 32284 (#0)
> GET / HTTP/1.1
> Host: localhost:32284
> User-Agent: curl/7.58.0
> Accept: */*
predictions_result.npy
141295744
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 13880)
* stopped the pause stream!
* Closing connection 0
Error:
Traceback (most recent call last):
  File "model.py", line 51, in <module>
    client.sendall(data)
ConnectionResetError: [Errno 104] Connection reset by peer
Thank you very much for helping me with this; help is highly appreciated. I am stuck: I have tried port forwarding, but it's not working.

run flask app kubernetes

This might be a silly question, but how can I get https://localhost:5000 working through my Flask Kubernetes app to ensure it's returning the right info?
This is my workflow so far:
$ eval $(minikube docker-env)
$ docker build ...
$ kubectl apply -f deploy.yaml (contains deployment & service)
$ kubectl set image...
kubectl logs ... returns the output below; also, my pods are up and running, so nothing is failing:
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 443-461-677
The only thing is, when I go to that address in my browser, it says the site can't be reached. When I curl https://localhost:5000 or curl https://0.0.0.0:5000/ I get a failed-to-connect error. I feel like my environment/setup is wrong somehow. Any tips/suggestions?
Thank you!
Also, here's my deploy.yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
        ports:
        - containerPort: 80
        env:
        - name: "SECRET_KEY"
          value: /etc/secret-volume/secret-key
        - name: "SECRET_CRT"
          value: /etc/secret-volume/secret-crt
      volumes:
      - name: secret-volume
        secret:
          secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
    nodePort: 30000
Dockerfile:
FROM python:2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 5000
CMD ["python", "app.py"]
As you have exposed port 5000 in the Dockerfile, you need to expose the same port on the container in your Deployment. After that, you need to configure your Service to use this port.
It should look like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
        ports:
        - containerPort: 5000  #<<<PORT FIXED
        env:
        - name: "SECRET_KEY"
          value: /etc/secret-volume/secret-key
        - name: "SECRET_CRT"
          value: /etc/secret-volume/secret-crt
      volumes:
      - name: secret-volume
        secret:
          secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 5000  #<<<PORT FIXED
    targetPort: 5000
    nodePort: 30000
After that, you can reach your application on <any-kubernetes-node-IP>:30000.
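For example (a sketch; the node IP placeholder must be filled in, and with minikube you can get it from minikube ip):

$ kubectl get nodes -o wide          # find a node's IP
$ curl -k https://<node-ip>:30000/   # the app serves TLS on 5000, so use https; -k skips self-signed cert checks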
You need to create a service with label selector myapp.
But there is another way you can do curl: log into the running pod and execute curl from inside the pod. Just do:
kubectl exec -it podname /bin/bash
This will open a bash shell. Then you can do:
curl localhost:5000

Kubernetes deployment connection refused

I'm trying to deploy a simple Python app to Google Container Engine:
I have created a cluster, then run kubectl create -f deployment.yaml.
A deployment pod has been created on my cluster. After that I have created a service with: kubectl create -f service.yaml
Here are my YAML configurations:
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
  - name: test-ctr
    image: arycloud/flask-svc
    ports:
    - containerPort: 5000
Here's my Dockerfile:
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./app.py
deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        name: test-app
    spec:
      containers:
      - name: test-app
        image: arycloud/flask-svc
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 32000
  selector:
    app: test-app
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
It creates a LoadBalancer and provides an external IP, but when I open the IP it returns a Connection Refused error.
What's going wrong?
Help me, please!
Thank you,
Abdul
You can first check whether the pod is working with curl podip:port; in your scenario that would be curl podip:8080. If that doesn't work, you have to check whether the process inside the image is actually bound to port 8080.
If it works, then try the service with curl svcip:svcport; in your scenario that would be curl svcip:80. If that doesn't work, it's a Kubernetes networking configuration issue.
If that still works, then the issue must be at the ingress layer.
In theory, it should work if everything matches the k8s rules.
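A sketch of that layer-by-layer check, using a throwaway curl pod (the pod name and IP placeholders are illustrative):

$ kubectl get pods -o wide    # pod IP
$ kubectl get svc test-app    # service IP
$ kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- curl -v http://<pod-ip>:8080/
$ kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- curl -v http://<svc-ip>:80/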
Your deployment file doesn't have a selector, which means the service cannot find any pods to redirect the request to.
Also, you must match the containerPort in the deployment file with the targetPort in the service file.
I've tested this in my lab environment and it works for me:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
spec:
  selector:
    matchLabels:
      app: test-app
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        image: arycloud/flask-svc
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    app: test-app
First make sure your ingress controller is running; to check that, run kubectl get pods -n ingress-nginx. If you don't find any pods running, you need to deploy the Kubernetes ingress controller, which you can do with kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml.
If you have installed the ingress controller correctly, then just apply the YAML below. You need to have a selector in your deployment so that the deployment can manage the replicas; apart from that, you don't need to expose a node port, as you are going to access your app through the load balancer.
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: test-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
spec:
  selector:
    matchLabels:
      app: test-app
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        image: arycloud/flask-svc
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
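Once applied, each layer can be verified quickly (a sketch; the names come from the manifests above, and the external IP placeholder must be filled in from the service output):

$ kubectl get pods -n ingress-nginx   # is the ingress controller running?
$ kubectl get pods,svc,ingress        # workload, service, and ingress created?
$ curl -v http://<external-ip>/       # the LoadBalancer should answer on port 80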
