manage.py': [Errno 2] No such file or directory - python

I am deploying a microservice on Azure through GitHub Actions, and the pod is in CrashLoopBackOff status.
Here is the logs command output from the Kubernetes namespace for the crashing container.
Is there something to be done with the volumes? From some searching, people seem to be complaining about that.
kubectl logs --previous --tail 10 app-dev-559d688468-8lr6n
/usr/local/bin/python: can't open file '/app/3/ON_ /Scripts/manage.py': [Errno 2] No such file or directory
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-dev
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: -app
          image: "${DOCKER_REGISTRY}/app:${IMAGE_TAG}"
          readinessProbe:
            failureThreshold: 6
            httpGet:
              path: /
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 1
          imagePullPolicy: Always
          command: ["/bin/sh"]
          args:
            - -c
            - >-
              /bin/sed -i -e "s/# 'autodynatrace.wrappers.django'/'autodynatrace.wrappers.django'/" /app/T /ON_ 3/ON_ /settings.py &&
              /usr/local/bin/python manage.py collectstatic --noinput &&
              AUTOWRAPT_BOOTSTRAP=autodynatrace AUTODYNATRACE_FORKABLE=True /usr/local/bin/gunicorn --workers 8 --preload --timeout 120 --config gunicorn.conf.py --bind 0.0.0.0:8000
          env:
            - name: AUTODYNATRACE_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: AUTODYNATRACE_APPLICATION_ID
              value: Django ($(AUTODYNATRACE_POD_NAME):8000)
          ports:
            - containerPort: 8000
          volumeMounts:
            - name: secrets
              readOnly: true
              mountPath: /root/FHIREngine/conf
            - name: secrets
              readOnly: true
              mountPath: /home/ view/FHIREngine/conf
      imagePullSecrets:
        - name: docker-registry-credentials
      volumes:
        - name: secrets
          secret:
            secretName: config
            defaultMode: 420
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: dev
spec:
  type: NodePort
  ports:
    - name: app
      port: 8000
      targetPort: 8000
  selector:
    app: app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev
  annotations:
    #external-dns.alpha.kubernetes.io/hostname: .io
    #external-dns.alpha.kubernetes.io/type: external
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
    - host: com
      http:
        paths:
          - path: /
            backend:
              # serviceName: app
              # servicePort: 8000
              service:
                name: app
                port:
                  number: 8000
            pathType: ImplementationSpecific
The same image is working fine on the AWS side.
kubectl describe pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 13m (x681 over 157m) kubelet Back-off restarting failed container
Normal Pulling 3m27s (x36 over 158m) kubelet Pulling image " applicationA:latest"
Let me know if you have any ideas.

A pod in CrashLoopBackOff status indicates that pod startup fails repeatedly in Kubernetes.
Possible workaround 1:
1) OOMKilled: you may also see Reason: OOMKilled in the container status output. Check whether your application needs more resources.
2) Check that your manage.py file is located in the root directory of the project, and make sure the command is run from the directory that manage.py is in. Also check for any errors in the name of the file or its path.
3) If the liveness/readiness probe failed, you will see a corresponding warning in the events output, so adjust the timing of the liveness/readiness probes. See Define readiness probes for more information.
Please refer to this link for more detailed information.
Possible workaround 2:
Based on the warning: reason: BackOff, message: Back-off restarting failed container
Since the container uses restartPolicy: Always, consider changing it to restartPolicy: OnFailure; this should mark the pod status as Completed once the process/command in the container ends.
Refer to the official Kubernetes documentation for more information.
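Before changing probes or the restart policy, it can help to confirm where manage.py actually lives inside the image, since the error says the path does not exist. A minimal debugging sketch, assuming the image reference and dev namespace from the deployment above (the pod name manage-py-debug is illustrative):
# Start a throwaway copy of the image with the command overridden, so the
# failing startup command never runs and the filesystem can be inspected:
kubectl run manage-py-debug --rm -it --namespace dev \
  --image="${DOCKER_REGISTRY}/app:${IMAGE_TAG}" --command -- /bin/sh
# Inside that shell:
#   pwd
#   find / -name manage.py -not -path '/proc/*' 2>/dev/null
Whatever path find prints is the one the Deployment's args must cd into (or reference absolutely) before running python manage.py; if it differs from the path in the error, that mismatch, rather than the volumes, is the first thing to fix.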

Related

Kubernetes: Error loading ASGI app. Attribute "app" not found in module "main"

I am trying to deploy a Python FastAPI app in an EKS cluster (I am able to test this code on my local system). Whenever I deploy the Docker image, it fails with the error:
"
INFO: Will watch for changes in these directories: ['/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1] using statreload
ERROR: Error loading ASGI app. Attribute "app" not found in module "main".
"
I have created the Docker image and pushed it to a local repository. During deployment I am able to pull the image but not able to create the container, and when I checked the pod logs I got the above error message.
My main.py file content:
from typing import Optional
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}
My Dockerfile:
FROM python:3.9.5
COPY . /app
COPY .pip /root
WORKDIR /app
RUN pip3 install -r docker_req.txt
#COPY ./main.py /app/
#COPY ./__init__.py /app/
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
My deployment file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: cti-datalake-poc
    meta.helm.sh/release-namespace: **********<replaced the name>
  generation: 1
  labels:
    app: cti-datalake-poc
    app.kubernetes.io/instance: cti-datalake-poc
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cti-datalake-poc
    app.kubernetes.io/version: 1.0.0
    helm.sh/chart: cti-datalake-poc-1.0.0
    version: 1.0.0
  name: cti-datalake-poc
  namespace: **********<replaced the name>
  resourceVersion: "******"
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cti-datalake-poc
      app.kubernetes.io/instance: cti-datalake-poc
      app.kubernetes.io/name: cti-datalake-poc
      version: 1.0.0
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: cti-datalake-poc
        app.kubernetes.io/instance: cti-datalake-poc
        app.kubernetes.io/name: cti-datalake-poc
        deployid: *****
        version: 1.0.0
    spec:
      containers:
        - image: develop-ctl-dl-poc/cti-datalake-poc:1.0.5.0
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: http
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: cti-datalake-poc
          ports:
            - containerPort: 5000
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: http
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
          securityContext: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: ***<name removed>
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
My requirement.txt is:
fastapi==0.73.0
pydantic==1.9.0
uvicorn==0.17.0
Any help is appreciated.
Add the directory name in front of the filename, i.e., if your directory name is app, change main:app to app.main:app, so the CMD in the Dockerfile becomes:
CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0"]
In addition, you can check this SO post.
It will depend on the file name: if, for example, it were called server.py, then the string you should use is "server:app".
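If it is unclear which module string applies, one way to decide is to list what the image actually contains relative to the WORKDIR. A small sketch, reusing the image tag from the deployment above (registry access permitting):
# Override the entrypoint so the container just lists files and exits:
docker run --rm --entrypoint ls develop-ctl-dl-poc/cti-datalake-poc:1.0.5.0 -R /app
If main.py sits directly in /app (the WORKDIR), main:app is correct; if it sits at /app/app/main.py, use app.main:app instead.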
I had this same problem, but I was trying to run a FastAPI app with docker-compose. Turns out I had mounted my local directories wrong.
This was my docker-compose.yml:
version: "3.3"
services:
  fastapi_app:
    build:
      context: .
      dockerfile: FastAPI-Dockerfile
    volumes:
      - ./:/code/app:z
    ports:
      - "80:80"
    tty: true
See the error in volumes? I'm mounting the whole project directory into /code/app in the container. Since my project directory already has an /app folder in it, the only way uvicorn can find it is with:
CMD ["uvicorn", "app.app.main:app", "--reload", "--host", "0.0.0.0"]
I've made a mess of my directories :(
The correct volumes mount is:
volumes:
  - ./app:/code/app:z
Hope this helps someone else!
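A quick way to sanity-check a mount like this is to list what the container actually sees at the mount point. A small sketch, assuming the fastapi_app service name from the compose file above:
docker-compose up -d fastapi_app
docker-compose exec fastapi_app ls /code/app
With the corrected mount, this should show main.py and friends directly, rather than a nested app/ directory.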

PermissionError: [Errno 1] Operation not permitted: 'file.txt' -> 'symlink.txt' while using os.symlink

I am using Helm charts to deploy my Kubernetes application in a local minikube cluster. I was able to mount the /home/$USER/log directory and verified it by creating and modifying a file in the mounted directory using shell commands.
# touch /log/a
# ls
a delete.cpp dm
But when I use Python to create a symlink, it fails.
>>> import os
>>> os.symlink("delete.cpp", "b")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
PermissionError: [Errno 1] Operation not permitted: 'delete.cpp' -> 'b'
Any idea why the symlink is not working?
I am able to use the same code in a different directory.
To mount the host directory in minikube I am using:
minikube mount ~/log:/log
My deployment script looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      volumes:
        - name: log-dir
          hostPath:
            path: /log
      containers:
        - name: my-app
          image: my-image
          imagePullPolicy: never # It's a local image
          volumeMounts:
            - name: log-dir
              mountPath: /log
          command: [ "/bin/bash", "-ce", "./my_app_executing_symlink" ]
According to the Linux manpage on symlink(2), you'd get that error when the file system doesn't support symlinks.
EPERM The filesystem containing linkpath does not support the
creation of symbolic links.
In the case of a minikube mount, that certainly sounds possible.
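One way to check this hypothesis directly is to try the same symlink from a shell inside the pod, taking Python out of the picture. A small sketch, assuming the my-app deployment from the question (newer kubectl versions accept deploy/<name>; otherwise substitute the pod name):
kubectl exec -it deploy/my-app -- /bin/sh -c 'cd /log && ln -s delete.cpp b && ls -l b'
If this also fails with "Operation not permitted", the limitation is in the minikube mount itself, not in the Python code.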
If you are using minikube, you can use a hostPath persistent volume, which supports hostPath for development and testing on a single-node cluster.
Example usage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/log"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-volume
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: pv0001
      persistentVolumeClaim:
        claimName: example-volume
  containers:
    - name: my-app
      image: alpine
      command: [ "/bin/sh", "-c", "sleep 10000" ]
      volumeMounts:
        - name: pv0001
          mountPath: /log
After a successful deployment you will be able to create symlinks in the /log directory:
$ kubectl exec -it my-app -- /bin/sh
/log # touch a
/log # ln -s a pd
-rw-r--r-- 1 root root 0 Nov 25 17:49 a
lrwxrwxrwx 1 root root 1 Nov 25 17:49 pd -> a
And as mentioned in the documentation:
minikube is configured to persist files stored under the following
directories, which are made in the Minikube VM (or on your localhost
if running on bare metal). You may lose data from other directories on
reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner

run flask app kubernetes

This might be a silly question, but how can I get https://localhost:5000 working through my Flask Kubernetes app to ensure it is returning the right info?
This is my workflow so far:
$ eval $(minikube docker-env)
$ docker build ...
$ kubectl apply -f deploy.yaml (contains deployment & service)
$ kubectl set image...
kubectl logs ... returns the output below; also, my pods are up and running, so nothing is failing:
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 443-461-677
The only thing is, when I go to that address in my browser it says the site can't be reached. When I curl https://localhost:5000 or curl https://0.0.0.0:5000/ I get a failed-to-connect error. I feel like my environment/setup is wrong somehow. Any tips/suggestions?
Thank you!
Also, here's my deploy.yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: secret-volume
              mountPath: /etc/secret-volume
          ports:
            - containerPort: 80
          env:
            - name: "SECRET_KEY"
              value: /etc/secret-volume/secret-key
            - name: "SECRET_CRT"
              value: /etc/secret-volume/secret-crt
      volumes:
        - name: secret-volume
          secret:
            secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
      nodePort: 30000
Dockerfile:
FROM python:2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 5000
CMD ["python", "app.py"]
As you have exposed port 5000 in the Dockerfile, you need to expose the same port on the container in your Deployment. After that, you need to configure your Service to use this port.
It should look like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: secret-volume
              mountPath: /etc/secret-volume
          ports:
            - containerPort: 5000 #<<<PORT FIXED
          env:
            - name: "SECRET_KEY"
              value: /etc/secret-volume/secret-key
            - name: "SECRET_CRT"
              value: /etc/secret-volume/secret-crt
      volumes:
        - name: secret-volume
          secret:
            secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 5000 #<<<PORT FIXED
      targetPort: 5000
      nodePort: 30000
After that, you can reach your application on <any-kubernetes-node-IP>:30000.
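A sketch of how that last check might look from outside the cluster; the jsonpath picks the first node's InternalIP (with minikube, minikube ip works too), and since the Flask log above shows the app serving HTTPS, curl needs https:// plus -k for the self-signed certificate:
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl -k "https://$NODE_IP:30000/"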
You need to create a service with the label selector myapp.
But there is another way: you can run curl from inside the running pod.
Just do kubectl exec -it <podname> -- /bin/bash, which will open a bash shell.
Then you can do curl localhost:5000.

Kubernetes deployment connection refused

I'm trying to deploy a simple Python app to Google Container Engine:
I have created a cluster and then run kubectl create -f deployment.yaml
A deployment pod has been created on my cluster. After that I have created a service with: kubectl create -f deployment.yaml
Here are my YAML configurations:
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: test-app
spec:
  containers:
    - name: test-ctr
      image: arycloud/flask-svc
      ports:
        - containerPort: 5000
Here's my Dockerfile:
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./app.py
deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        name: test-app
    spec:
      containers:
        - name: test-app
          image: arycloud/flask-svc
          resources:
            requests:
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
      nodePort: 32000
  selector:
    app: test-app
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
It creates a LoadBalancer and provides an external IP; when I open the IP, it returns a Connection Refused error.
What's going wrong?
Help me, please!
Thank You,
Abdul
You can first check whether the pod is working with curl podip:port; in your scenario, that would be curl podip:8080. If that does not work, check whether the process inside the image is actually bound to port 8080.
If it works, then try the service with curl svcip:svcport; in your scenario, curl svcip:80. If that does not work, it is a Kubernetes networking configuration issue.
If that still works, then the issue is happening at the ingress layer.
In theory, it should work if everything matches the Kubernetes rules.
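A sketch of those step-by-step checks as commands, using the test-app labels and service name from the question; the curls have to run from a node or from another pod inside the cluster:
POD_IP=$(kubectl get pod -l app=test-app -o jsonpath='{.items[0].status.podIP}')
SVC_IP=$(kubectl get svc test-app -o jsonpath='{.spec.clusterIP}')
curl "http://$POD_IP:8080/"   # step 1: the container itself
curl "http://$SVC_IP:80/"     # step 2: the service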
Your deployment file doesn't have a selector, which means the service cannot find any pods to redirect requests to.
Also, you must match the containerPort in the deployment file with the targetPort in the service file.
I've tested in my lab environment and it works for me:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
spec:
  selector:
    matchLabels:
      app: test-app
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: test-app
          image: arycloud/flask-svc
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 5000
  selector:
    app: test-app
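One way to confirm the selector fix took effect: once the pod labels match the service selector, the service should list endpoints. A small check, assuming the test-app names above:
kubectl get endpoints test-app
An empty ENDPOINTS column would mean the service still matches no pods.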
First make sure your ingress controller is running; to check, run kubectl get pods -n ingress-nginx. If you don't find any pods running, you need to deploy the Kubernetes ingress controller, which you can do with kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml.
If you have installed the ingress controller correctly, then just apply the YAML below. You need to have a selector in your deployment so that the deployment can manage the replicas; apart from that, you don't need to expose a node port, as you are going to access your app through the load balancer.
apiVersion: v1
kind: Service
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: test-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-app
  name: test-app
spec:
  selector:
    matchLabels:
      app: test-app
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
        - name: test-app
          image: arycloud/flask-svc
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
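After applying a manifest like the one above, a rough way to check that each layer came up (the file name here is illustrative; the LoadBalancer IP can take a minute or two to be assigned):
kubectl apply -f test-app.yaml
kubectl get pods -l app=test-app
kubectl get svc test-app -w        # wait for EXTERNAL-IP
kubectl get ingress test-ingress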

access docker container in kubernetes

I have a docker container with an application exposing port 8080.
I can run it and access it on my local computer:
$ docker run -p 33333:8080 foo
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
I can test it with:
$ nc -v localhost 33333
connection succeeded!
However when I deploy it in Kubernetes it doesn't work.
Here is the manifest file:
apiVersion: v1
kind: Pod
metadata:
  name: foo-pod
  namespace: foo
  labels:
    name: foo-pod
spec:
  containers:
    - name: foo
      image: bar/foo:latest
      ports:
        - containerPort: 8080
and
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 33333
  selector:
    name: foo-pod
Deployed with:
$ kubectl apply -f foo.yaml
$ nc -v <publicIP> 33333
Connection refused
I don't understand why I cannot access it...
The problem was that the application was listening on IP 127.0.0.1.
It needs to listen on 0.0.0.0 to work in Kubernetes. A change in the application code did the trick.
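A quick way to see which address the app is bound to inside the pod, assuming a shell and either ss or netstat is available in the bar/foo image:
kubectl exec -n foo foo-pod -- sh -c 'ss -lntp || netstat -lntp'
A listener on 127.0.0.1:8080 is reachable only from inside the container, while 0.0.0.0:8080 is reachable through the Service.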
