I am trying to deploy my application in a Kubernetes cluster where readOnlyRootFilesystem is enabled.
The application deploys properly, but when the Apache server tries to execute python3.6 it continuously throws the following error:
Current thread 0x00007f568a43ebc0 (most recent call first):
[Mon Jun 13 16:49:19.634769 2022] [core:notice] [pid 13:tid 140009663622080] AH00052: child pid 341 exit signal Aborted (6)
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ModuleNotFoundError: No module named 'encodings'
I have read about this error but couldn't find a solution yet.
I am attaching my manifest file for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: mynamespace
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
name: main
spec:
containers:
- env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
envFrom:
- configMapRef:
name: myapp
image: myappimage
# command: ["tail","-f","/dev/null"]
imagePullPolicy: Always
livenessProbe:
initialDelaySeconds: 300
periodSeconds: 180
tcpSocket:
port: 8080
name: myapp
ports:
- containerPort: 8080
readinessProbe:
initialDelaySeconds: 180
periodSeconds: 60
tcpSocket:
port: 8080
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 1
memory: 2Gi
tty: true
volumeMounts:
- mountPath: /etc/apache2/conf-available
name: var-lock
- mountPath: /var/lock/apache2
name: var-lock
- mountPath: /var/log/apache2
name: var-lock
- mountPath: /mnt/log/apache2
name: var-lock
- mountPath: /var/run/apache2
name: var-lock
- mountPath: /tmp
name: tmp
# - mountPath: /usr/lib/python3.6
# name: python
securityContext:
fsGroup: 1000
runAsGroup: 1000
runAsUser: 1000
volumes:
- emptyDir: {}
name: var-lock
- emptyDir: {}
name: tmp
# - emptyDir: {}
# name: python
Please let me know if any other info is needed.
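For reference, a minimal way to check the interpreter's environment from inside the running container, assuming kubectl exec access (the pod name below is a placeholder), would be something like:
$ kubectl exec -n mynamespace <myapp-pod-name> -- env | grep -i python   # a stray PYTHONHOME/PYTHONPATH is a common cause of "No module named 'encodings'"
$ kubectl exec -n mynamespace <myapp-pod-name> -- python3.6 -c 'import sys; print(sys.prefix)'
$ kubectl exec -n mynamespace <myapp-pod-name> -- ls /usr/lib/python3.6/encodings   # path taken from the commented-out mount above
$ kubectl exec -n mynamespace <myapp-pod-name> -- touch /tmp/write-test   # confirm the writable emptyDir mounts still work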
Related
I am deploying a microservice on Azure through GitHub Actions, and the pod is in CrashLoopBackOff status.
Here is the output of the logs command from the Kubernetes namespace; the container is the one in CrashLoopBackOff.
Is there something to be done with the volumes? From some searching, people seem to be complaining about that.
kubectl logs --previous --tail 10 app-dev-559d688468-8lr6n
/usr/local/bin/python: can't open file '/app/3/ON_ /Scripts/manage.py': [Errno 2] No such file or directory
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-dev
namespace: dev
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: -app
image: "${DOCKER_REGISTRY}/app:${IMAGE_TAG}"
readinessProbe:
failureThreshold: 6
httpGet:
path: /
port: 8000
initialDelaySeconds: 30
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 1
imagePullPolicy: Always
command: ["/bin/sh"]
args:
- -c
- >-
/bin/sed -i -e "s/# 'autodynatrace.wrappers.django'/'autodynatrace.wrappers.django'/" /app/T /ON_ 3/ON_ /settings.py &&
/usr/local/bin/python manage.py collectstatic --noinput &&
AUTOWRAPT_BOOTSTRAP=autodynatrace AUTODYNATRACE_FORKABLE=True /usr/local/bin/gunicorn --workers 8 --preload --timeout 120 --config gunicorn.conf.py --bind 0.0.0.0:8000
env:
- name: AUTODYNATRACE_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: AUTODYNATRACE_APPLICATION_ID
value: Django ($(AUTODYNATRACE_POD_NAME):8000)
ports:
- containerPort: 8000
volumeMounts:
- name: secrets
readOnly: true
mountPath: /root/FHIREngine/conf
- name: secrets
readOnly: true
mountPath: /home/ view/FHIREngine/conf
imagePullSecrets:
- name: docker-registry-credentials
volumes:
- name: secrets
secret:
secretName: config
defaultMode: 420
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: app
namespace: dev
spec:
type: NodePort
ports:
- name: app
port: 8000
targetPort: 8000
selector:
app: app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app
namespace: dev
annotations:
#external-dns.alpha.kubernetes.io/hostname: .io
#external-dns.alpha.kubernetes.io/type: external
kubernetes.io/ingress.class: nginx-internal
spec:
rules:
- host: com
http:
paths:
- path: /
backend:
# serviceName: app
# servicePort: 8000
service:
name: app
port:
number: 8000
pathType: ImplementationSpecific
The same image is working fine on the AWS side.
kubectl describe pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 13m (x681 over 157m) kubelet Back-off restarting failed container
Normal Pulling 3m27s (x36 over 158m) kubelet Pulling image " applicationA:latest"
Let me know if you have any ideas.
Pod CrashLoopBackOff status indicates that pod startup fails repeatedly in Kubernetes.
Possible workaround 1:
1) OOMKilled: if this is the cause, you will likely see Reason: OOMKilled in the container's last state in the kubectl describe pod output. Check whether your application needs more resources.
2) Check that your manage.py file is located where the command expects it (normally the root directory of the project) and that the command runs from the directory that manage.py is in. Also check for errors in the name of the file or the path; a quick way to inspect the image is shown in the sketch after this list.
3) If the liveness probe fails, you will see a corresponding warning in the events output, so adjust the time for the liveness/readiness probes if the container needs longer to start. See Define readiness probes for more information.
Please refer to this link for more detailed information.
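A rough sketch of how one might verify both of those points (resource limits and the manage.py path); the pod name is the one from the logs above, and the image reference is a placeholder taken from the manifest:
$ kubectl describe pod app-dev-559d688468-8lr6n -n dev | grep -A 5 "Last State"   # shows Reason: OOMKilled if memory was the problem
$ docker run --rm --entrypoint ls "${DOCKER_REGISTRY}/app:${IMAGE_TAG}" -la /app   # list what the image actually has under /app
$ docker run --rm --entrypoint find "${DOCKER_REGISTRY}/app:${IMAGE_TAG}" / -name manage.py 2>/dev/null   # or search the whole image for manage.py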
Possible workaround 2:
Based on the warning: reason: BackOff, message: Back-off restarting failed container.
As you have set the container restartPolicy: Always, the recommendation is to change it to restartPolicy: OnFailure; this should mark the pod status Completed once the process/command in the container ends.
Refer to Kubernetes official documentation for more information.
I have the following Dockerfile, from which I need to create an image and run it as a Kubernetes deployment:
ARG PYTHON_VERSION=3.7
FROM python:${PYTHON_VERSION}
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ARG USERID
ARG USERNAME
WORKDIR /code
COPY requirements.txt ./
COPY manage.py ./
RUN pip install -r requirements.txt
RUN useradd -u "${USERID:-1001}" "${USERNAME:-jananath}"
USER "${USERNAME:-jananath}"
EXPOSE 8080
COPY . /code/
RUN pwd
RUN ls
ENV PATH="/code/bin:${PATH}"
# CMD bash
ENTRYPOINT ["/usr/local/bin/python"]
# CMD ["manage.py", "runserver", "0.0.0.0:8080"]
And I create the image, tag it, and push it to my private repository.
And I have the Kubernetes manifest file as below:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
tier: my-app
name: my-app
namespace: my-app
spec:
replicas: 1
selector:
matchLabels:
tier: my-app
template:
metadata:
labels:
tier: my-app
spec:
containers:
- name: my-app
image: "<RETRACTED>.dkr.ecr.eu-central-1.amazonaws.com/my-ecr:webv1.11"
imagePullPolicy: Always
args:
- "manage.py"
- "runserver"
- "0.0.0.0:8080"
env:
- name: HOST_POSTGRES
valueFrom:
configMapKeyRef:
key: HOST_POSTGRES
name: my-app
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
key: POSTGRES_DB
name: my-app
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
key: POSTGRES_USER
name: my-app
- name: USERID
valueFrom:
configMapKeyRef:
key: USERID
name: my-app
- name: USERNAME
valueFrom:
configMapKeyRef:
key: USERNAME
name: my-app
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: POSTGRES_PASSWORD
name: my-app
ports:
- containerPort: 8080
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 00m
memory: 1000Mi
When I run the deployment above, the pod gets killed every time, and when I try to see the logs, this is all I see:
exec /usr/local/bin/python: exec format error
This is a simple django python application.
What is interesting is that this works fine with docker-compose, as below:
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
web:
build:
context: .
args:
USERID: ${USERID}
USERNAME: ${USERNAME}
command: manage.py runserver 0.0.0.0:8080
volumes:
- .:/code
ports:
- "8080:8080"
environment:
- POSTGRES_NAME=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
env_file:
- .env
Can someone help me with this?
Try to inspect your image architecture using
docker image inspect <your image name>
If you see something like,
"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
and it is different from your cluster's architecture, then you must build your image for the same architecture as your cluster (either on a machine with that architecture or with a cross-platform build).
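For example (a sketch; registry, image name and tag are placeholders), you can read the architecture the nodes report and then cross-build for that platform with Docker Buildx:
$ kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'   # e.g. amd64 or arm64
$ docker buildx build --platform linux/amd64 -t <registry>/my-app:<tag> --push .   # build and push for the cluster's platform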
I am using Helm charts to deploy my Kubernetes application in a local minikube cluster. I was able to mount the /home/$USER/log directory and verified it by creating and modifying a file in the mounted directory using shell commands.
#touch /log/a
# ls
a delete.cpp dm
But when I use Python to create a symlink, it fails.
>>> import os
>>> os.symlink("delete.cpp", "b")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
PermissionError: [Errno 1] Operation not permitted: 'delete.cpp' -> 'b'
Any idea why the symlink is not working?
I am able to use the same code in a different directory.
To mount the host directory in minikube I am using:
minikube mount ~/log:/log
My deployment script looks like this
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
template:
metadata:
labels:
app: my-app
spec:
volumes:
- name: log-dir
hostPath:
path: /log
containers:
- name: my-app
image: my-image
imagePullPolicy: never #It's local image
volumeMounts:
- name: log-dir
mountPath: /log
command: [ "/bin/bash", "-ce", "./my_app_executing_symlink" ]
According to the Linux manpage on symlink(2), you'd get that error when the file system doesn't support symlinks.
EPERM The filesystem containing linkpath does not support the
creation of symbolic links.
In the case of a minikube mount, that certainly sounds possible.
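A quick way to confirm that, assuming kubectl exec access (the pod name is a placeholder): try the same symlink on the minikube mount and on a directory that belongs to the container's own filesystem:
$ kubectl exec -it <my-app-pod> -- sh -c 'cd /log && ln -s delete.cpp b'   # expected to fail with "Operation not permitted" if the mount lacks symlink support
$ kubectl exec -it <my-app-pod> -- sh -c 'cd /tmp && touch x && ln -s x y && ls -l y'   # expected to succeed on the container's own filesystem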
If you are using minikube, you can use a hostPath PersistentVolume, which is intended for development and testing on a single-node cluster.
Example usage:
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-volume
labels:
type: local
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data/log"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-volume
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
volumes:
- name: pv0001
persistentVolumeClaim:
claimName: example-volume
containers:
- name: my-app
image: alpine
command: [ "/bin/sh", "-c", "sleep 10000" ]
volumeMounts:
- name: pv0001
mountPath: /log
After a successful deployment you will be able to create symlinks in the /log directory:
$ kubectl exec -it my-app -- /bin/sh
/log # touch a
/log # ln -s a pd
/log # ls -l
-rw-r--r-- 1 root root 0 Nov 25 17:49 a
lrwxrwxrwx 1 root root 1 Nov 25 17:49 pd -> a
And as mentioned in the documentation:
minikube is configured to persist files stored under the following
directories, which are made in the Minikube VM (or on your localhost
if running on bare metal). You may lose data from other directories on
reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
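To apply and verify the example above (the file name below is arbitrary), something like this should be enough:
$ kubectl apply -f volume.yaml   # the PV, PVC and Pod manifests above saved in one file
$ kubectl get pv,pvc             # the claim should reach STATUS Bound before the pod starts
$ kubectl exec -it my-app -- sh -c 'touch /log/a && ln -s /log/a /log/pd && ls -l /log'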
This might be a silly question, but how can I get https://localhost:5000 working through my Flask Kubernetes app to ensure it's returning the right info?
This is my workflow so far:
$ eval $(minikube docker-env)
$ docker build ...
$ kubectl apply -f deploy.yaml (contains deployment & service)
$ kubectl set image...
kubectl logs... returns the output below; also, my pods are up and running, so nothing is failing:
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 443-461-677
The only thing is, when I go to that address in my browser it says the site can't be reached, and when I curl https://localhost:5000 or curl https://0.0.0.0:5000/ I get a failed-to-connect error. I feel like my environment/setup is wrong somehow. Any tips/suggestions?
Thank you!
Here's my deploy.yaml file as well:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp
namespace: test-space
spec:
selector:
matchLabels:
app: myapp
replicas: 3
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp
imagePullPolicy: IfNotPresent
volumeMounts:
- name: secret-volume
mountPath: /etc/secret-volume
ports:
- containerPort: 80
env:
- name: "SECRET_KEY"
value: /etc/secret-volume/secret-key
- name: "SECRET_CRT"
value: /etc/secret-volume/secret-crt
volumes:
- name: secret-volume
secret:
secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: test-space
spec:
type: NodePort
selector:
app: myapp
ports:
- protocol: TCP
port: 80
targetPort: 5000
nodePort: 30000
Dockerfile:
FROM python:2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 5000
CMD ["python", "app.py"]
As you have exposed port 5000 in the Dockerfile, you need to expose the same port on the container in your Deployment. After that, you need to configure your Service to use this port.
It should look like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp
namespace: test-space
spec:
selector:
matchLabels:
app: myapp
replicas: 3
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp
imagePullPolicy: IfNotPresent
volumeMounts:
- name: secret-volume
mountPath: /etc/secret-volume
ports:
- containerPort: 5000 #<<<PORT FIXED
env:
- name: "SECRET_KEY"
value: /etc/secret-volume/secret-key
- name: "SECRET_CRT"
value: /etc/secret-volume/secret-crt
volumes:
- name: secret-volume
secret:
secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
name: myapp
namespace: test-space
spec:
type: NodePort
selector:
app: myapp
ports:
- protocol: TCP
port: 5000 #<<<PORT FIXED
targetPort: 5000
nodePort: 30000
After that, you can reach your application at <any-kubernetes-node-IP>:30000.
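Since this appears to run on minikube (judging by the workflow in the question), a quick way to get a reachable node address and test it; the -k flag is only there because the logs show Flask serving HTTPS, presumably with a self-signed certificate:
$ minikube ip                                  # prints the node IP
$ curl -k https://$(minikube ip):30000/
$ minikube service myapp -n test-space --url   # or let minikube print the service URL for you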
You need to create a service with the label selector myapp.
But there is another way you can do the curl:
by logging into the running pod and executing curl from inside the pod.
Just do kubectl exec -it podname -- /bin/bash. This will open a bash shell.
Then you can do curl localhost:5000.
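One caveat, judging by the logs in the question: the app is serving HTTPS on port 5000, so the in-pod curl will likely need the https scheme and -k for a self-signed certificate, e.g.:
$ kubectl exec -it <pod-name> -n test-space -- curl -k https://localhost:5000/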
I'm trying to deploy a simple Python app to Google Container Engine:
I have created a cluster and then run kubectl create -f deployment.yaml
That created a deployment pod on my cluster. After that I created a service with: kubectl create -f deployment.yaml
Here are my YAML configurations:
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
name: test-app
spec:
containers:
- name: test-ctr
image: arycloud/flask-svc
ports:
- containerPort: 5000
Here's my Dockerfile:
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./app.py
deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
replicas: 1
template:
metadata:
labels:
app: test-app
name: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
resources:
requests:
cpu: "100m"
imagePullPolicy: Always
ports:
- containerPort: 8080
service.yaml:
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 32000
selector:
app: test-app
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
It creates a LoadBalancer and provides an external IP, but when I open the IP it returns a Connection Refused error.
What's going wrong?
Help me, please!
Thank You,
Abdul
You can first check whether the pod is working with curl podip:port; in your scenario that should be curl podip:8080. If that does not work, you have to check whether the process in the image you are using actually binds to port 8080.
If it works, then try the service with curl svcip:svcport; in your scenario that should be curl svcip:80. If that does not work, it is a Kubernetes networking configuration issue.
If that still works, then the issue must be at the ingress layer.
In theory, it should work if everything matches the k8s rules.
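A sketch of how to find those IPs and ports (run the curls from a node or from another pod, since pod and cluster IPs are not reachable from outside the cluster):
$ kubectl get pods -l app=test-app -o wide   # the IP column is the pod IP
$ kubectl get svc test-app                   # CLUSTER-IP and PORT(S)
$ curl http://<pod-ip>:8080/
$ curl http://<cluster-ip>:80/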
Your deployment file doesn't have a selector, so the Deployment will not create any pods and the Service has nothing to redirect requests to.
Also, you must match the containerPort in the deployment file with the targetPort in the service file.
I've tested this in my lab environment and it works for me:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
selector:
matchLabels:
app: test-app
replicas: 1
template:
metadata:
labels:
app: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
imagePullPolicy: Always
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 5000
selector:
app: test-app
First, make sure your ingress controller is running; to check that, run kubectl get pods -n ingress-nginx. If you don't find any pods running, you need to deploy the Kubernetes ingress controller, which you can do with kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml.
If you have installed the ingress controller correctly, then just apply the YAML below. You need to have a selector in your deployment so that the deployment can manage the replicas; apart from that, you don't need to expose a NodePort, as you are going to access your app through the load balancer.
apiVersion: v1
kind: Service
metadata:
name: test-app
labels:
app: test-app
spec:
type: LoadBalancer
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: test-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-app
name: test-app
spec:
selector:
matchLabels:
app: test-app
replicas: 1
template:
metadata:
labels:
app: test-app
spec:
containers:
- name: test-app
image: arycloud/flask-svc
imagePullPolicy: Always
ports:
- containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
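After applying the manifests above, a minimal way to verify each layer (names are the ones used above):
$ kubectl get pods -n ingress-nginx       # is the ingress controller running?
$ kubectl get deploy,svc test-app         # are the pods ready and an EXTERNAL-IP assigned?
$ kubectl get ingress test-ingress        # has the ingress been given an ADDRESS?
$ curl http://<external-ip-or-ingress-address>/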