I have the following Dockerfile, from which I need to build an image and run it as a Kubernetes Deployment.
ARG PYTHON_VERSION=3.7
FROM python:${PYTHON_VERSION}
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ARG USERID
ARG USERNAME
WORKDIR /code
COPY requirements.txt ./
COPY manage.py ./
RUN pip install -r requirements.txt
RUN useradd -u "${USERID:-1001}" "${USERNAME:-jananath}"
USER "${USERNAME:-jananath}"
EXPOSE 8080
COPY . /code/
RUN pwd
RUN ls
ENV PATH="/code/bin:${PATH}"
# CMD bash
ENTRYPOINT ["/usr/local/bin/python"]
# CMD ["manage.py", "runserver", "0.0.0.0:8080"]
I built the image, tagged it, and pushed it to my private repository.
And I have the Kubernetes manifest file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    tier: my-app
  name: my-app
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: my-app
  template:
    metadata:
      labels:
        tier: my-app
    spec:
      containers:
      - name: my-app
        image: "<RETRACTED>.dkr.ecr.eu-central-1.amazonaws.com/my-ecr:webv1.11"
        imagePullPolicy: Always
        args:
        - "manage.py"
        - "runserver"
        - "0.0.0.0:8080"
        env:
        - name: HOST_POSTGRES
          valueFrom:
            configMapKeyRef:
              key: HOST_POSTGRES
              name: my-app
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              key: POSTGRES_DB
              name: my-app
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              key: POSTGRES_USER
              name: my-app
        - name: USERID
          valueFrom:
            configMapKeyRef:
              key: USERID
              name: my-app
        - name: USERNAME
          valueFrom:
            configMapKeyRef:
              key: USERNAME
              name: my-app
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: POSTGRES_PASSWORD
              name: my-app
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 1000m
            memory: 1000Mi
          requests:
            cpu: 00m
            memory: 1000Mi
When I apply the Deployment above, the pod gets killed every time, and when I try to see the logs this is all I see:
exec /usr/local/bin/python: exec format error
This is a simple Django (Python) application.
What is interesting is that the same setup works fine with docker-compose, as below:
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
  web:
    build:
      context: .
      args:
        USERID: ${USERID}
        USERNAME: ${USERNAME}
    command: manage.py runserver 0.0.0.0:8080
    volumes:
      - .:/code
    ports:
      - "8080:8080"
    environment:
      - POSTGRES_NAME=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    env_file:
      - .env
Can someone help me with this?
Try to inspect your image architecture using
docker image inspect <your image name>
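If you only want the fields that matter here, docker image inspect also accepts a Go-template filter; a minimal sketch (substitute your own image name):
# prints something like linux/amd64 or linux/arm64
docker image inspect --format '{{.Os}}/{{.Architecture}}' <your image name>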
If you see something like,
"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
and it is different from your cluster's architecture, then you must build your image on a machine with the same architecture as your cluster (or do a cross-platform build).
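For example, if the image was built on an Apple Silicon (arm64) machine but the cluster nodes are amd64, a cross-platform build with Docker Buildx is one way to fix it. This is only a sketch: the ECR path is copied from the manifest above, and the linux/amd64 target is an assumption you should confirm against your nodes first.
# check what architecture the cluster nodes actually run
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.architecture}{"\n"}{end}'
# build and push an amd64 image from an arm64 workstation
docker buildx build --platform linux/amd64 -t <RETRACTED>.dkr.ecr.eu-central-1.amazonaws.com/my-ecr:webv1.11 --push .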
Related
I am deploying a microservice on Azure through GitHub Actions, and the pod ends up in CrashLoopBackOff status.
Here is the logs command output from the Kubernetes namespace where the container is crash-looping.
Is there something to be done with the volumes? From some searching, people seem to be complaining about that.
kubectl logs --previous --tail 10 app-dev-559d688468-8lr6n
/usr/local/bin/python: can't open file '/app/3/ON_ /Scripts/manage.py': [Errno 2] No such file or directory
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-dev
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: -app
        image: "${DOCKER_REGISTRY}/app:${IMAGE_TAG}"
        readinessProbe:
          failureThreshold: 6
          httpGet:
            path: /
            port: 8000
          initialDelaySeconds: 30
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 1
        imagePullPolicy: Always
        command: ["/bin/sh"]
        args:
        - -c
        - >-
          /bin/sed -i -e "s/# 'autodynatrace.wrappers.django'/'autodynatrace.wrappers.django'/" /app/T /ON_ 3/ON_ /settings.py &&
          /usr/local/bin/python manage.py collectstatic --noinput &&
          AUTOWRAPT_BOOTSTRAP=autodynatrace AUTODYNATRACE_FORKABLE=True /usr/local/bin/gunicorn --workers 8 --preload --timeout 120 --config gunicorn.conf.py --bind 0.0.0.0:8000
        env:
        - name: AUTODYNATRACE_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: AUTODYNATRACE_APPLICATION_ID
          value: Django ($(AUTODYNATRACE_POD_NAME):8000)
        ports:
        - containerPort: 8000
        volumeMounts:
        - name: secrets
          readOnly: true
          mountPath: /root/FHIREngine/conf
        - name: secrets
          readOnly: true
          mountPath: /home/ view/FHIREngine/conf
      imagePullSecrets:
      - name: docker-registry-credentials
      volumes:
      - name: secrets
        secret:
          secretName: config
          defaultMode: 420
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: dev
spec:
  type: NodePort
  ports:
  - name: app
    port: 8000
    targetPort: 8000
  selector:
    app: app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev
  annotations:
    #external-dns.alpha.kubernetes.io/hostname: .io
    #external-dns.alpha.kubernetes.io/type: external
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
  - host: com
    http:
      paths:
      - path: /
        backend:
          # serviceName: app
          # servicePort: 8000
          service:
            name: app
            port:
              number: 8000
        pathType: ImplementationSpecific
The same image works fine on the AWS side.
kubectl describe pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 13m (x681 over 157m) kubelet Back-off restarting failed container
Normal Pulling 3m27s (x36 over 158m) kubelet Pulling image " applicationA:latest"
Let me know if you have any ideas.
Pod CrashLoopBackOff status indicates that pod startup fails repeatedly in Kubernetes.
Possible workaround 1:
1) OOMKilled: you will likely see Reason: OOMKilled for the container in the pod's describe output. Check whether your application needs more resources.
2) Check that your manage.py file is located in the root directory of the project and that the command runs from the directory manage.py is in. Also check for any errors in the file name. (You can verify this inside the image itself; see the sketch at the end of this workaround.)
3) If the liveness probe failed, you can see a warning like 'manage.py': [Errno 2] No such file or directory in the events output, so adjust the timing of the liveness/readiness probes. See Define readiness probes for more information.
Please refer to this link for more detailed information.
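To confirm point 2 without redeploying, you can look inside the image directly. This is only a sketch; the image reference is a placeholder and the /app path is taken from the error message above, so adjust both to your build:
# list what actually ended up in the image's working directory
docker run --rm --entrypoint /bin/sh <your-registry>/app:latest -c 'pwd; ls -la /app'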
Possible workaround 2:
Based on the warning: reason: BackOff, message: Back-off restarting failed container.
You mentioned the container has restartPolicy: Always. A common recommendation is to change it to restartPolicy: OnFailure, which marks the pod Completed once the process/command in the container ends; note, however, that a Deployment's pod template only allows restartPolicy: Always, so this option applies to bare Pods or Jobs.
Refer to Kubernetes official documentation for more information.
I am trying to deploy a Python FastAPI application in an EKS cluster (I am able to test this code on my local system). Whenever I deploy the Docker image, it fails with this error:
"
INFO: Will watch for changes in these directories: ['/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1] using statreload
ERROR: Error loading ASGI app. Attribute "app" not found in module "main".
"
I created the Docker image and pushed it to a local repository. During deployment I am able to pull the image but not able to create the container, and when I checked the pod logs I got the above error message.
My main.py file content:
from typing import Optional
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}
My Dockerfile:
FROM python:3.9.5
COPY . /app
COPY .pip /root
WORKDIR /app
RUN pip3 install -r docker_req.txt
#COPY ./main.py /app/
#COPY ./__init__.py /app/
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
My Deployment file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: cti-datalake-poc
    meta.helm.sh/release-namespace: **********<replaced the name>
  generation: 1
  labels:
    app: cti-datalake-poc
    app.kubernetes.io/instance: cti-datalake-poc
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cti-datalake-poc
    app.kubernetes.io/version: 1.0.0
    helm.sh/chart: cti-datalake-poc-1.0.0
    version: 1.0.0
  name: cti-datalake-poc
  namespace: **********<replaced the name>
  resourceVersion: "******"
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cti-datalake-poc
      app.kubernetes.io/instance: cti-datalake-poc
      app.kubernetes.io/name: cti-datalake-poc
      version: 1.0.0
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: cti-datalake-poc
        app.kubernetes.io/instance: cti-datalake-poc
        app.kubernetes.io/name: cti-datalake-poc
        deployid: *****
        version: 1.0.0
    spec:
      containers:
      - image: develop-ctl-dl-poc/cti-datalake-poc:1.0.5.0
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: cti-datalake-poc
        ports:
        - containerPort: 5000
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: http
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: ***<name removed>
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
The requirements file is:
fastapi==0.73.0
pydantic==1.9.0
uvicorn==0.17.0
Any help is appreciated.
Add the directory name in front of the file name, i.e. if your directory name is app,
change main:app to app.main:app, so the CMD in your Dockerfile becomes:
CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0"]
In addition you can check this SO post.
It'll depend on the file name: if, for example, it was called server.py, then the string you should use is "server:app".
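One quick way to see which module path uvicorn will accept is to poke around inside the built image before changing the CMD. A rough sketch, assuming the WORKDIR is /app as in the Dockerfile above and <your-image> is the image you pushed:
# see where main.py actually lives inside the image
docker run --rm --entrypoint /bin/sh <your-image> -c 'ls -R /app'
# this import only succeeds if "app.main:app" is the right string for uvicorn
docker run --rm --entrypoint python <your-image> -c 'from app.main import app; print(app)'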
I had this same problem, but I was trying to run a FastAPI app with docker-compose. Turns out I had mounted my local directories wrong.
This was my docker-compose.yml:
version: "3.3"
services:
fastapi_app:
build:
context: .
dockerfile: FastAPI-Dockerfile
volumes:
- ./:/code/app:z
ports:
- "80:80"
tty: true
See the error in volumes? I'm mounting the whole project directory into /code/app in the container. Since my project directory already has an app folder in it, the only way uvicorn can find it is with:
CMD ["uvicorn", "app.app.main:app", "--reload", "--host", "0.0.0.0"]
I've made a mess of my directories :(
The correct volumes mount is:
volumes:
  - ./app:/code/app:z
Hope this helps someone else!
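If you want to double-check a mount like this before restarting uvicorn, you can list what the container actually sees. A small sketch, using the fastapi_app service name from the Compose file above:
# should show main.py (and friends) under /code/app
docker compose exec fastapi_app ls -la /code/app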
I have been trying to run a Python Django application on Kubernetes, but without success. The application runs fine in Docker.
This is the YAML Deployment for Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-02-06T14:48:45Z"
  generation: 1
  labels:
    app: keyvault
  name: keyvault
  namespace: default
  resourceVersion: "520"
  uid: ccf0e490-517f-4102-b282-2dcd71008948
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: keyvault
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: keyvault
    spec:
      containers:
      - image: david900412/keyvault_web:latest
        imagePullPolicy: Always
        name: keyvault-web-5wrph
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2022-02-06T14:48:45Z"
    lastUpdateTime: "2022-02-06T14:48:45Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-02-06T14:48:45Z"
    lastUpdateTime: "2022-02-06T14:48:46Z"
    message: ReplicaSet "keyvault-6944b7b468" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
This is the Docker Compose file I'm using to run the image in Docker:
version: "3.9"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
This is the Dockerfile I'm using to run the image in Docker:
FROM python:3.9
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
kubectl describe pod output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned default/keyvault-6944b7b468-frss4 to minikube
Normal Pulled 37s kubelet Successfully pulled image "david900412/keyvault_web:latest" in 12.5095594s
Normal Pulled 33s kubelet Successfully pulled image "david900412/keyvault_web:latest" in 434.2995ms
Normal Pulling 17s (x3 over 49s) kubelet Pulling image "david900412/keyvault_web:latest"
Normal Created 16s (x3 over 35s) kubelet Created container keyvault-web-5wrph
Normal Started 16s (x3 over 35s) kubelet Started container keyvault-web-5wrph
Normal Pulled 16s kubelet Successfully pulled image "david900412/keyvault_web:latest" in 395.5345ms
Warning BackOff 5s (x4 over 33s) kubelet Back-off restarting failed container
kubectl logs for the pod does not show anything :(
Thanks for your help.
This is a community wiki answer posted for better visibility. Feel free to expand it.
Based on the comments, the solution should be as shown below.
Remove the volumes definition from the Compose file:
version: "3.9"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
Specify the startup command with CMD in the Dockerfile for the image:
FROM python:3.9
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD ["python3","manage.py","runserver"]
Then translate the Docker Compose file to Kubernetes resources. This can be done using Kompose or another suitable solution.
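A minimal Kompose invocation, assuming the Compose file above is saved as docker-compose.yml in the current directory and k8s/ is just a scratch output folder:
# generate Kubernetes manifests from the Compose file, then apply them
kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/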
I am using the django-cron package in a Docker container, where I specify my cron functions like this:
from django_cron import CronJobBase, Schedule

class CreateSnapshots(CronJobBase):
    RUN_EVERY_MINS = 15  # every 15 mins
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'crontab.cronjobs.CreateSnapshots'

    def do(self):
        print("Running CreateSnapshots now..")
In my docker-entrypoint.sh for Kubernetes, I specify this:
python manage.py runcrons
The cron job doesn't run every 15 minutes as configured, but when I run python manage.py runcrons in a bash shell inside the container, the cron job runs immediately. Any idea why it is not running every 15 minutes as specified, or is there a piece of configuration I am missing?
I have specified this in my settings.py:
CRON_CLASSES = [
    "crontab.cronjobs.CreateSnapshots",
]
My Kubernetes Deployment Spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.14.0 (fa706f2)
  creationTimestamp: null
  labels:
    io.kompose.service: cronjobs
  name: cronjobs
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: cronjobs
    spec:
      imagePullSecrets:
      - name: mycompregcred
      containers:
      - args:
        - python
        - manage.py
        - runserver
        - 0.0.0.0:8003
        image: eu.gcr.io/my-project-123/cronjobs
        name: cronjobs
        ports:
        - containerPort: 8003
        resources: {}
        volumeMounts:
        - mountPath: /cronjobs
          name: cronjobs-claim0
        - mountPath: /cronjobs/common
          name: cronjobs-claim1
      restartPolicy: Always
      volumes:
      - name: cronjobs-claim0
        persistentVolumeClaim:
          claimName: cronjobs-claim0
      - name: cronjobs-claim1
        persistentVolumeClaim:
          claimName: cronjobs-claim1
status: {}
And the cronjobs app part of my docker-compose.yaml:
cronjobs:
  image: cronjobs
  build: ./cronjobs
  depends_on:
    - db
My full docker-entrypoint.sh looks like this:
#!/bin/sh
wait-for-it db:5432
python manage.py collectstatic --no-input
python manage.py migrate django_cron
python manage.py runcrons
gunicorn cronjobs.wsgi -b 0.0.0.0:8000
This might be a silly question, but how can I get https://localhost:5000 working with my Flask Kubernetes app to make sure it's returning the right info?
This is my workflow so far:
$ eval $(minikube docker-env)
$ docker build ...
$ kubectl apply -f deploy.yaml (contains deployment & service)
$ kubectl set image...
kubectl logs ... returns the output below; also, my pods are up and running, so nothing is failing:
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 443-461-677
The only thing is, when I go to that address in my browser it says the site can't be reached, and when I curl https://localhost:5000 or curl https://0.0.0.0:5000/ I get a failed-to-connect error. I feel like my environment/setup is wrong somehow. Any tips/suggestions?
Thank you!
Also, here's my deploy.yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
        ports:
        - containerPort: 80
        env:
        - name: "SECRET_KEY"
          value: /etc/secret-volume/secret-key
        - name: "SECRET_CRT"
          value: /etc/secret-volume/secret-crt
      volumes:
      - name: secret-volume
        secret:
          secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
    nodePort: 30000
Dockerfile:
FROM python:2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 5000
CMD ["python", "app.py"]
As you have exposed port 5000 in the Dockerfile, you need to expose the same port on the container in your Deployment. After that, you need to configure your Service to use this port.
It should look like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
        ports:
        - containerPort: 5000 # <<< PORT FIXED
        env:
        - name: "SECRET_KEY"
          value: /etc/secret-volume/secret-key
        - name: "SECRET_CRT"
          value: /etc/secret-volume/secret-crt
      volumes:
      - name: secret-volume
        secret:
          secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 5000 # <<< PORT FIXED
    targetPort: 5000
    nodePort: 30000
After that, you can reach your application on <any-kubernetes-node-IP>:30000.
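For example, something like the following; the node IP is a placeholder, and the -k flag is an assumption based on the Flask log above showing the app serving HTTPS (presumably with a self-signed certificate):
# find a node IP, then hit the NodePort from outside the cluster
kubectl get nodes -o wide
curl -k https://<any-kubernetes-node-IP>:30000/
# on minikube you can also ask for a ready-made URL
minikube service myapp -n test-space --url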
You need to create a Service with a label selector matching myapp.
But there is another way you can run curl:
by logging into the running pod and executing curl from inside the pod.
Just do
kubectl exec -it podname -- /bin/bash — this will open a bash shell.
Then you can do curl localhost:5000.
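Putting that together as a single command (the pod name is a placeholder, -k assumes the self-signed certificate from the logs above, and curl is assumed to be present in the image, as it is in the full python base images):
# run curl inside the pod against the Flask port directly
kubectl exec -it <podname> -n test-space -- curl -k https://localhost:5000/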