I would like to create a persistent volume on my Kubernetes (GCP) cluster and use it in my Django app as, for example, the media folder.
On the Kubernetes side I do the following.
First I create a volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-zeus
  namespace: ctest
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Then in my deployments.yaml I create a volume and associate it with the pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
  namespace: ctest
  labels:
    app: django
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django
  template:
    metadata:
      labels:
        app: django
    spec:
      volumes:
        - name: cc-volume
          persistentVolumeClaim:
            claimName: pvc-zeus
      containers:
        - name: django
          image: gcr.io/direct-variety-3066123/cc-mirror
          volumeMounts:
            - mountPath: "/app/test-files"
              name: cc-volume
...
Then in my Django settings:
MEDIA_URL = '/test-files/'
Here is my Dockerfile:
FROM python:3.8-slim
ENV PROJECT_ROOT /app
WORKDIR $PROJECT_ROOT
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
RUN chmod +x run.sh
CMD python manage.py runserver --settings=settings.kube 0.0.0.0:8000
When I apply the volume claim on my cluster everything works (the volume claim is created), but when I apply deployment.yaml no volume is created for the pods (also, if I connect to my pods with bash, no test-files folder exists).
How can I create a volume for my deployment's pods and use it in my Django app?
Many thanks in advance.
You need to have one of two Kubernetes objects in place for a PVC to be satisfied: a PersistentVolume (PV) or a StorageClass (SC).
As shown, your PVC does not indicate a PV or an SC from which to create the volume.
Usually, when you don't specify a PV or an SC in a PVC, the cluster's default SC is used to dynamically provision the volume.
So, if you just want to work with the default SC, check whether your specific cluster has one active or whether you need to create/activate it.
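A quick way to check is kubectl get storageclass. On GKE a default StorageClass (usually named standard) normally exists; a minimal sketch of the same PVC bound to it explicitly might look like this (the class name is an assumption about your cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-zeus
  namespace: ctest
spec:
  storageClassName: standard   # assumed default SC; confirm with "kubectl get storageclass"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Also note that a ReadWriteOnce volume can only be attached to a single node, so with replicas: 3 any pods scheduled on other nodes will fail to mount it.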
I have been trying to run a Python Django application on Kubernetes without success. The application runs fine in Docker.
This is the Deployment YAML for Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-02-06T14:48:45Z"
  generation: 1
  labels:
    app: keyvault
  name: keyvault
  namespace: default
  resourceVersion: "520"
  uid: ccf0e490-517f-4102-b282-2dcd71008948
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: keyvault
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: keyvault
    spec:
      containers:
      - image: david900412/keyvault_web:latest
        imagePullPolicy: Always
        name: keyvault-web-5wrph
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2022-02-06T14:48:45Z"
    lastUpdateTime: "2022-02-06T14:48:45Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-02-06T14:48:45Z"
    lastUpdateTime: "2022-02-06T14:48:46Z"
    message: ReplicaSet "keyvault-6944b7b468" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
This is the docker compose file I'm using to run the image in Docker:
version: "3.9"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
This is the Dockerfile I'm using to build the image in Docker:
FROM python:3.9
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
kubectl describe pod output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned default/keyvault-6944b7b468-frss4 to minikube
Normal Pulled 37s kubelet Successfully pulled image "david900412/keyvault_web:latest" in 12.5095594s
Normal Pulled 33s kubelet Successfully pulled image "david900412/keyvault_web:latest" in 434.2995ms
Normal Pulling 17s (x3 over 49s) kubelet Pulling image "david900412/keyvault_web:latest"
Normal Created 16s (x3 over 35s) kubelet Created container keyvault-web-5wrph
Normal Started 16s (x3 over 35s) kubelet Started container keyvault-web-5wrph
Normal Pulled 16s kubelet Successfully pulled image "david900412/keyvault_web:latest" in 395.5345ms
Warning BackOff 5s (x4 over 33s) kubelet Back-off restarting failed container
kubectl logs for the pod does not show anything :(
Thanks for your help.
This is a community wiki answer posted for better visibility. Feel free to expand it.
Based on the comments, the solution should be as shown below.
Remove the volumes definition from the Compose file:
version: "3.9"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
Specify the startup command with CMD in the Dockerfile:
FROM python:3.9
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD ["python3","manage.py","runserver"]
Then translate the Docker Compose file into Kubernetes resources. This can be done using Kompose or another suitable tool, as sketched below.
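For example, a minimal Kompose run might look like this (a sketch; it assumes docker-compose.yml sits in the current directory, and the generated file names can vary by kompose version):

kompose convert -f docker-compose.yml
kubectl apply -f web-deployment.yaml -f web-service.yaml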
I am using Helm charts to deploy my Kubernetes application in a local minikube cluster. I was able to mount the /home/$USER/log directory and verified it by creating and modifying a file in the mounted directory using shell commands.
# touch /log/a
# ls
a delete.cpp dm
But when I use Python to create a symlink, it fails.
>>> import os
>>> os.symlink("delete.cpp", "b")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
PermissionError: [Errno 1] Operation not permitted: 'delete.cpp' -> 'b'
Any idea why the symlink is not working?
I am able to use the same code in a different directory.
To mount the host directory in minikube I am using:
minikube mount ~/log:/log
My deployment manifest looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      volumes:
        - name: log-dir
          hostPath:
            path: /log
      containers:
        - name: my-app
          image: my-image
          imagePullPolicy: Never  # it's a local image
          volumeMounts:
            - name: log-dir
              mountPath: /log
          command: [ "/bin/bash", "-ce", "./my_app_executing_symlink" ]
According to the Linux manpage on symlink(2), you'd get that error when the file system doesn't support symlinks.
EPERM  The filesystem containing linkpath does not support the
       creation of symbolic links.
In the case of a minikube mount (a 9p share), that certainly sounds possible.
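If the code has to keep working on such a mount, one possible workaround (a sketch, not part of the original answer) is to fall back to copying when symlinking is refused:

import os
import shutil

def link_or_copy(src, dst):
    """Try to symlink src -> dst; copy instead if the filesystem refuses."""
    try:
        os.symlink(src, dst)
    except OSError:
        # e.g. a 9p "minikube mount" share that does not support symlinks
        shutil.copy2(src, dst)

link_or_copy("delete.cpp", "b")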
If you are using minikube, you can use a hostPath PersistentVolume, which is meant for development and testing on a single-node cluster.
Example usage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/log"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-volume
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: pv0001
      persistentVolumeClaim:
        claimName: example-volume
  containers:
    - name: my-app
      image: alpine
      command: [ "/bin/sh", "-c", "sleep 10000" ]
      volumeMounts:
        - name: pv0001
          mountPath: /log
After a successful deployment you will be able to create symlinks in the /log directory:
$ kubectl exec -it my-app -- /bin/sh
/log # touch a
/log # ln -s a pd
/log # ls -l
-rw-r--r-- 1 root root 0 Nov 25 17:49 a
lrwxrwxrwx 1 root root 1 Nov 25 17:49 pd -> a
And as mentioned in the documentation:
minikube is configured to persist files stored under the following
directories, which are made in the Minikube VM (or on your localhost
if running on bare metal). You may lose data from other directories on
reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
I have a Dockerfile in which I am hardcoding the env variables for now, as they get injected into the app during the build process. Now I want to inject them at runtime, when the application runs in the k8s pod. I tried this but it's not working. Below is my Dockerfile. It's my first time doing serious Python and I am not sure how to fix it.
FROM python:3.7-slim AS build
WORKDIR /app
COPY . .
RUN python3 setup.py bdist_wheel
#ENV USE_DB="True" \
# DB_USERNAME= \
# DB_HOST= \
# DB_PASSWORD= \
# DB_DB=sth
RUN pip3 install dist/app_search*.whl && \
semanticsearch-preprocess
FROM python:3.7-slim
WORKDIR /opt/srv
COPY --from=build /app/dist/app_search*.whl /opt/srv/
COPY --from=build /tmp/projects* /opt/srv/
# set environment variables to /opt/srv
ENV DICT_FILE="/opt/srv/projects.dict" \
MODEL_FILE="/opt/srv/projects.model.cpickle" \
INDEX_FILE="/opt/srv/projects.index" \
EXTERNAL_INDEX_FILE="/opt/srv/projects.mm.metadata.cpickle"
RUN pip3 install waitress && \
pip3 install app_search*.whl
EXPOSE 5000
ENTRYPOINT [ "waitress-serve" ]
CMD [ "--call", "app_search.app:main" ]
First, create a k8s ConfigMap or a k8s Secret (better for sensitive data) in your cluster.
Then read these values in the k8s deployment YAML as env variables for the pod.
Official docs: https://kubernetes.io/docs/concepts/configuration/secret/
E.g. Secret YAML:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:            # stringData accepts plain text; use data only with base64-encoded values
  username: "abc-user"
  password: "pwd-here"
Pod YAML (the same env section works inside a Deployment's pod template):
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
  restartPolicy: Never
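Inside the container, the application then reads these values at runtime instead of at build time. A minimal sketch, assuming the variable names from the commented-out ENV block in your Dockerfile:

import os

# Values are supplied by the pod spec (Secret/ConfigMap), not baked into the image.
USE_DB = os.environ.get("USE_DB", "False") == "True"
DB_USERNAME = os.environ.get("DB_USERNAME", "")
DB_HOST = os.environ.get("DB_HOST", "")
DB_PASSWORD = os.environ.get("DB_PASSWORD", "")
DB_DB = os.environ.get("DB_DB", "sth")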
You can pass env variables to a k8s pod with the pod spec field env.
Look at the following example from the k8s docs:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Also take a look at the k8s documentation for more information:
k8s api spec reference
defining environment variables
using secrets as environment variables
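If you have many non-sensitive variables, you can also inject a whole ConfigMap at once with envFrom (a sketch with assumed names and placeholder values):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # assumed name
data:
  USE_DB: "True"
  DB_HOST: "db.example.internal"   # placeholder
---
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo
spec:
  containers:
    - name: app
      image: redis
      envFrom:
        - configMapRef:
            name: app-config       # every key becomes an env variable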
I am using the django-cron package in a Docker container, where I specify my cron functions like this:
from django_cron import CronJobBase, Schedule

class CreateSnapshots(CronJobBase):
    RUN_EVERY_MINS = 15  # every 15 mins

    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'crontab.cronjobs.CreateSnapshots'

    def do(self):
        print("Running CreateSnapshots now..")
In my docker-entrypoint.sh for Kubernetes, I specify this:
python manage.py runcrons
The cron job doesn't run every 15 minutes as instructed, but when I run python manage.py runcrons in the container's bash, the cron job runs immediately. Any idea why it is not running every 15 minutes as specified, or what piece of configuration I am missing?
I have specified this in my settings.py:
CRON_CLASSES = [
    "crontab.cronjobs.CreateSnapshots",
]
My Kubernetes Deployment Spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml
    kompose.version: 1.14.0 (fa706f2)
  creationTimestamp: null
  labels:
    io.kompose.service: cronjobs
  name: cronjobs
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: cronjobs
    spec:
      imagePullSecrets:
      - name: mycompregcred
      containers:
      - args:
        - python
        - manage.py
        - runserver
        - 0.0.0.0:8003
        image: eu.gcr.io/my-project-123/cronjobs
        name: cronjobs
        ports:
        - containerPort: 8003
        resources: {}
        volumeMounts:
        - mountPath: /cronjobs
          name: cronjobs-claim0
        - mountPath: /cronjobs/common
          name: cronjobs-claim1
      restartPolicy: Always
      volumes:
      - name: cronjobs-claim0
        persistentVolumeClaim:
          claimName: cronjobs-claim0
      - name: cronjobs-claim1
        persistentVolumeClaim:
          claimName: cronjobs-claim1
status: {}
And the cronjobs part of my docker-compose.yaml:
cronjobs:
  image: cronjobs
  build: ./cronjobs
  depends_on:
    - db
My full docker-entrypoint.sh looks like this:
#!/bin/sh
wait-for-it db:5432
python manage.py collectstatic --no-input
python manage.py migrate django_cron
python manage.py runcrons
gunicorn cronjobs.wsgi -b 0.0.0.0:8000
This might be a silly question, but how can I get https://localhost:5000 working through my Flask Kubernetes app to ensure it's returning the right info?
This is my workflow so far:
$ eval $(minikube docker-env)
$ docker build ...
$ kubectl apply -f deploy.yaml (contains deployment & service)
$ kubectl set image...
kubectl logs... returns the output below; also, my pods are up and running, so nothing is failing:
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on https://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 443-461-677
The only thing is, when I go to that address in my browser it says the site can't be reached. When I curl https://localhost:5000 or curl https://0.0.0.0:5000/ I get a failed-to-connect error. I feel like my environment/setup is wrong somehow. Any tips/suggestions?
Thank you!
Also, here's my deploy.yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: secret-volume
              mountPath: /etc/secret-volume
          ports:
            - containerPort: 80
          env:
            - name: "SECRET_KEY"
              value: /etc/secret-volume/secret-key
            - name: "SECRET_CRT"
              value: /etc/secret-volume/secret-crt
      volumes:
        - name: secret-volume
          secret:
            secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
      nodePort: 30000
Dockerfile:
FROM python:2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 5000
CMD ["python", "app.py"]
As you expose port 5000 in the Dockerfile, you need to expose the same port on the container in your Deployment. After that, you need to configure your Service to use this port.
It should look like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: test-space
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: secret-volume
              mountPath: /etc/secret-volume
          ports:
            - containerPort: 5000  # <<< PORT FIXED
          env:
            - name: "SECRET_KEY"
              value: /etc/secret-volume/secret-key
            - name: "SECRET_CRT"
              value: /etc/secret-volume/secret-crt
      volumes:
        - name: secret-volume
          secret:
            secretName: my-secret3
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: test-space
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 5000       # <<< PORT FIXED
      targetPort: 5000
      nodePort: 30000
After that, you can reach your application on <any-kubernetes-node-IP>:30000, as shown below.
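For example, on minikube (which the docker-env workflow above suggests), a quick check might look like this; -k is needed because the app appears to serve a self-signed certificate (an assumption based on the https URL in the logs):

NODE_IP=$(minikube ip)   # the single node's IP in minikube
curl -k https://$NODE_IP:30000/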
You need to create a Service with the label selector app: myapp.
But there is another way: you can run curl by logging into a running pod and executing it from inside the pod.
Just do:
kubectl exec -it podname -- /bin/bash
This will open a bash shell in the pod. Then you can do:
curl -k https://localhost:5000