Is there a simple way to create a configuration object for the Python Kubernetes client by passing a variable containing the YAML of the kubeconfig?
It's fairly easy to do something like:
from kubernetes import client, config, watch

def main():
    config.load_kube_config()
or
from kubernetes import client, config, watch

def main():
    config.load_incluster_config()
But I would like to create the config from a variable containing the YAML kubeconfig. Let's say I have:
k8s_config = yaml.safe_load('''
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://asdf.asdf:443
  name: cluster
contexts:
- context:
    cluster: cluster
    user: admin
  name: admin
current-context: admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tVGYUZiL2sxZlRFTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgU0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
''')
And I would like to load it as:
config.KubeConfigLoader(k8s_config)
The reason for this is that I can't store the content of the kubeconfig in a file before loading the config.
The error I'm receiving is: "Error: module 'kubernetes.config' has no attribute 'KubeConfigLoader'"
KubeConfigLoader isn't included in the "pull up" inside config/__init__.py, which is why your kubernetes.config.KubeConfigLoader reference isn't working. You will have to reach into the implementation module and import the class directly:
from kubernetes.config.kube_config import KubeConfigLoader

k8s_config = yaml.safe_load('''...''')

config = KubeConfigLoader(
    config_dict=k8s_config,
    config_base_path=None)
Be aware that, unlike most of my answers, I didn't actually run this one, but that's the theory.
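For completeness, here's a minimal sketch of wiring that loader into an actual API client (equally untested; it relies on the loader's load_and_set method, which the GKE update further down also uses):

import yaml
from kubernetes import client
from kubernetes.config.kube_config import KubeConfigLoader

# Parse the kubeconfig YAML held in a variable (never written to disk).
k8s_config = yaml.safe_load('''...''')

loader = KubeConfigLoader(config_dict=k8s_config, config_base_path=None)

# Populate a Configuration object from the in-memory kubeconfig.
configuration = client.Configuration()
loader.load_and_set(configuration)

# Build an ApiClient from it and use the regular typed APIs.
api_client = client.ApiClient(configuration)
v1 = client.CoreV1Api(api_client)
print(v1.list_namespace())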
I would like to be able to access a GKE (Kubernetes) cluster in GCP from the Python kubernetes client.
I can't authenticate and connect to my cluster, and I can't find the reason.
Here is what I tried so far.
from google.auth import compute_engine
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client

def test_gke(request):
    project_id = "myproject"
    zone = "myzone"
    cluster_id = "mycluster"
    credentials = compute_engine.Credentials()
    cluster_manager_client = ClusterManagerClient(credentials=credentials)
    cluster = cluster_manager_client.get_cluster(name=f'projects/{project_id}/locations/{zone}/clusters/{cluster_id}')
    configuration = client.Configuration()
    configuration.host = f"https://{cluster.endpoint}:443"
    configuration.verify_ssl = False
    configuration.api_key = {"authorization": "Bearer " + credentials.token}
    client.Configuration.set_default(configuration)
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for i in pods.items:
        print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
I'd like to get the configuration working. I have it working where the code runs off-cluster and produces the kubectl config file for itself (see the update at the end).
Original
The first solution assumes (!) you have the cluster configured in your local config (~/.kube/config, possibly overridden by KUBECONFIG).
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client, config

config.load_kube_config()

api_instance = client.CoreV1Api()
resp = api_instance.list_pod_for_all_namespaces()
for i in resp.items:
    print(f"{i.status.pod_ip}\t{i.metadata.namespace}\t{i.metadata.name}")
NOTE
Assumes you've run gcloud container clusters get-credentials to set the ~/.kube/config file for the current cluster (and have a current-context set).
Uses your user credentials in the ~/.kube/config file, so no additional credentials are needed.
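For example, a typical invocation looks like this (cluster name, zone, and project are placeholders):

gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project PROJECT_ID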
Update
Okay, I have it working. Here's the code that will generate a kubectl config and connect to the cluster. This code uses Application Default Credentials to provide a Service Account key to the code (usually export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json)
import os
import google.auth
import base64

from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client, config
from ruamel import yaml

PROJECT = os.getenv("PROJECT")
ZONE = os.getenv("ZONE")
CLUSTER = os.getenv("CLUSTER")

# Get Application Default Credentials
# `project_id` is the Service Account's
# This may differ from the cluster's `PROJECT`
credentials, project_id = google.auth.default()

# Get the cluster config from GCP
cluster_manager_client = ClusterManagerClient(credentials=credentials)
name = f"projects/{PROJECT}/locations/{ZONE}/clusters/{CLUSTER}"
cluster = cluster_manager_client.get_cluster(name=name)

SERVER = cluster.endpoint
CERT = cluster.master_auth.cluster_ca_certificate

configuration = client.Configuration()

# Creates a `kubectl` config
NAME = "freddie"  # arbitrary
CONFIG = f"""
apiVersion: v1
kind: Config
clusters:
- name: {NAME}
  cluster:
    certificate-authority-data: {CERT}
    server: https://{SERVER}
contexts:
- name: {NAME}
  context:
    cluster: {NAME}
    user: {NAME}
current-context: {NAME}
users:
- name: {NAME}
  user:
    auth-provider:
      name: gcp
      config:
        scopes: https://www.googleapis.com/auth/cloud-platform
"""

# The Python SDK doesn't directly support providing a dict
# See: https://github.com/kubernetes-client/python/issues/870
kubeconfig = yaml.safe_load(CONFIG)

loader = config.kube_config.KubeConfigLoader(kubeconfig)
loader.load_and_set(configuration)

api_client = client.ApiClient(configuration)
api_instance = client.CoreV1Api(api_client)

# Enumerate e.g. Pods
resp = api_instance.list_pod_for_all_namespaces()
for i in resp.items:
    print(f"{i.status.pod_ip}\t{i.metadata.namespace}\t{i.metadata.name}")
I created a lambda function using serverless in private subnets of a non-default VPC. I wanted to restart the app server of an Elastic Beanstalk application at a scheduled time. I used boto3, and here is the reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html
The problem is that when I run the function locally, it runs and restarts the application server. But when I deploy using sls deploy, it does not work and I get a null response back when I test it from the Lambda console.
Here is the code:
import json
from logging import log
from loguru import logger
import boto3
from datetime import datetime
import pytz

def main(event, context):
    try:
        client = boto3.client("elasticbeanstalk", region_name="us-west-1")
        applications = client.describe_environments()
        current_hour = datetime.now(pytz.timezone("US/Eastern")).hour
        for env in applications["Environments"]:
            applicationname = env["EnvironmentName"]
            if applicationname == "xxxxx-xxx":
                response = client.restart_app_server(
                    EnvironmentName=applicationname,
                )
                logger.info(response)
                print("restarted the application")
                return {"statusCode": 200, "body": json.dumps("restarted the instance")}
    except Exception as e:
        logger.exception(e)

if __name__ == "__main__":
    main("", "")
Here is the serverless.yml file:
service: beanstalk-starter
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  profile: xxxx-admin
  region: us-west-1
  memorySize: 512
  timeout: 15
  vpc:
    securityGroupIds:
      - sg-xxxxxxxxxxx # (open on all ports for inbound)
    subnetIds:
      - subnet-xxxxxxxxxxxxxxxx # (private)
      - subnet-xxxxxxxxxxxxxxxx # (private)

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux

functions:
  main:
    handler: handler.main
    events:
      - schedule: rate(1 minute)
Response from lambda console:
The area below shows the result returned by your function execution. Learn more about returning results from your function.
null
Any help would be appreciated! Let me know what I'm missing here!
To solve this, I had to give these two permissions to my AWS Lambda role from the AWS Management Console. You can also set the permissions in the serverless.yml file (see the sketch after this list).
AWSLambdaVPCAccessExecutionRole
AWSCodePipeline_FullAccess
(*Make sure you use least privilege when granting permissions to a role.)
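For reference, a sketch of what the serverless.yml route could look like (iamRoleStatements under provider is standard in Serverless Framework v2; the exact action list below is an assumption, so scope it to your environment's ARN under least privilege):

provider:
  name: aws
  # ... rest of the provider block as above ...
  iamRoleStatements:
    # Roughly the VPC networking portion of AWSLambdaVPCAccessExecutionRole
    - Effect: Allow
      Action:
        - ec2:CreateNetworkInterface
        - ec2:DescribeNetworkInterfaces
        - ec2:DeleteNetworkInterface
      Resource: "*"
    # Elastic Beanstalk actions the function actually calls
    - Effect: Allow
      Action:
        - elasticbeanstalk:DescribeEnvironments
        - elasticbeanstalk:RestartAppServer
      Resource: "*"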
Thank you.
How do I push my app (using python-flask + redis) to gcr.io and deploy it to Google Kubernetes Engine (via a YAML file)?
I also want to set an environment variable for my app.
import os

import redis
from flask import Flask
from flask import request, redirect, render_template, url_for
from flask import Response

app = Flask(__name__)

redis_host = os.environ['REDIS_HOST']
app.redis = redis.StrictRedis(host=redis_host, port=6379, charset="utf-8", decode_responses=True)

# Be super aggressive about saving for the development environment.
# This says save every second if there is at least 1 change. If you use
# redis in production you'll want to read up on the redis persistence
# model.
app.redis.config_set('save', '1 1')

@app.route('/', methods=['GET', 'POST'])
def main_page():
    if request.method == 'POST':
        app.redis.lpush('entries', request.form['entry'])
        return redirect(url_for('main_page'))
    else:
        entries = app.redis.lrange('entries', 0, -1)
        return render_template('main.html', entries=entries)

# Route my app by POST and redirect to the main page
@app.route('/clear', methods=['POST'])
def clear_entries():
    app.redis.ltrim('entries', 1, 0)
    return redirect(url_for('main_page'))

# Use for docker on localhost
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
Posting this answer as a community wiki to set more of a baseline approach to the question rather than to give a specific solution addressing the code included in the question.
Feel free to edit/expand.
This topic could be quite wide, considering that it could be addressed in many different ways (as described in the question: by using Cloud Build, etc.).
Addressing two parts of this question specifically:
Building the image and sending it to GCR.
Using newly built image in GKE.
Building the image and sending it to GCR.
Assuming that your code and your whole Docker image run correctly, you can build/tag the image in the following manner and then send it to GCR:
gcloud auth configure-docker
adds the Docker credHelper entry to Docker's
configuration file, or creates the file if it doesn't exist. This will
register gcloud as the credential helper for all Google-supported
Docker registries.
docker tag YOUR_IMAGE gcr.io/PROJECT_ID/IMAGE_NAME
docker push gcr.io/PROJECT_ID/IMAGE_NAME
After that you can go to the:
GCP Cloud Console (Web UI) -> Container Registry
and see the image you've uploaded.
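You can also verify from the command line (a one-liner using the standard gcloud command; PROJECT_ID is a placeholder):

gcloud container images list --repository=gcr.io/PROJECT_ID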
Using newly built image in GKE
To run the earlier mentioned image you can either:
Create the Deployment in the Cloud Console (Kubernetes Engine -> Workloads -> Deploy)
A side note!
You can also add the environment variables of your choosing there (as pointed out in the question).
Create it with a YAML manifest similar to the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazing-app
  labels:
    app: amazing-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: amazing-app
  template:
    metadata:
      labels:
        app: amazing-app
    spec:
      containers:
      - name: amazing-app
        image: gcr.io/PROJECT-ID/IMAGE-NAME # <-- IMPORTANT!
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
Please take a specific look at the following part:
env:
- name: DEMO_GREETING
  value: "Hello from the environment"
This part will create an environment variable inside of each container:
$ kubectl exec -it amazing-app-6db8d7478b-4gtxk -- /bin/bash -c 'echo $DEMO_GREETING'
Hello from the environment
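Applied to the app from the question, the same mechanism can supply REDIS_HOST; a short sketch (the Service name redis is hypothetical and should match whatever Service fronts your Redis Pod):

        env:
        - name: REDIS_HOST
          value: "redis" # hypothetical Service name, resolved via cluster DNS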
Additional resources:
Cloud.google.com: Container registry: Docs: Pushing and pulling
Cloud.google.com: Build: Docs: Deploying builds: Deploy GKE
In the Python k8s client, I use the code below.
YAML file:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
  namespace: test
spec:
  volumeSnapshotClassName: snapshotclass
  source:
    persistentVolumeClaimName: test-pvc
Python code:
res = utils.create_from_dict(k8s_client, yaml_file)
However, I got this message:
AttributeError: module 'kubernetes.client' has no attribute 'SnapshotStorageV1Api'
I want to take a VolumeSnapshot in k8s.
How can I do that?
Please give me some advice!
As I pointed in part of the comment I made under the question:
Have you seen this github issue comment: github.com/kubernetes-client/python/issues/…?
The link posted in the comments is a github issue for:
VolumeSnapshot,
VolumeSnapshotClass,
VolumeSnapshotContent
support in the Kubernetes python client.
Citing the comment made in this github issue by user @roycaihw that explains how you can make a snapshot:
It looks like those APIs are CRDs: https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/#the-volumesnapshotclass-resource
If that's the case, you could use the CustomObject API to send requests to those APIs once they are installed. Example: https://github.com/kubernetes-client/python/blob/master/examples/custom_object.py
-- Github.com: Kubernetes client: Python: Issues: 1995: Issue comment: 2
Example
An example of a Python code that would make a VolumeSnapshot is following:
from kubernetes import client, config

def main():
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # it's my custom resource defined as Dict
    my_resource = {
        "apiVersion": "snapshot.storage.k8s.io/v1beta1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": "python-snapshot"},
        "spec": {
            "volumeSnapshotClassName": "example-snapshot-class",
            "source": {"persistentVolumeClaimName": "example-pvc"}
        }
    }

    # create the resource
    api.create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1beta1",
        namespace="default",
        plural="volumesnapshots",
        body=my_resource,
    )

if __name__ == "__main__":
    main()
Please change the values inside of this code to match your particular setup (e.g. apiVersion, .metadata.name, .metadata.namespace, etc.).
A side note!
This Python code was tested with GKE and its gce-pd-csi-driver.
After running this code the VolumeSnapshot should be created:
$ kubectl get volumesnapshots
NAME AGE
python-snapshot 19m
$ kubectl get volumesnapshotcontents
NAME AGE
snapcontent-71380980-6d91-45dc-ab13-4b9f42f7e7f2 19m
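If you'd rather verify from Python than with kubectl, here's a minimal sketch using the same CustomObjects API (get_namespaced_custom_object is the standard call; the readyToUse field assumes the snapshot controller has already populated .status):

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Read the snapshot back as a plain dict.
snapshot = api.get_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="volumesnapshots",
    name="python-snapshot",
)

# .status is set by the snapshot controller once it has reconciled.
print(snapshot.get("status", {}).get("readyToUse"))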
Additional resources:
Github.com: Kubernetes client: Python
Kubernetes.io: Docs: Concepts: Storage: Volume snapshots
I have a rather simple test app:
import redis
import os
import logging

log = logging.getLogger()
log.setLevel(logging.DEBUG)

def test_redis(event, context):
    redis_endpoint = None
    if "REDIS" in os.environ:
        redis_endpoint = os.environ["REDIS"]
        log.debug("redis: " + redis_endpoint)
    else:
        log.debug("cannot read REDIS config environment variable")
        return {
            'statusCode': 500
        }

    redis_conn = None
    try:
        redis_conn = redis.StrictRedis(host=redis_endpoint, port=6379, db=0)
        redis_conn.set("foo", "boo")
        redis_conn.get("foo")
    except:
        log.debug("failed to connect to redis")
        return {
            'statusCode': 500
        }
    finally:
        del redis_conn

    return {
        'statusCode': 200
    }
which I have deployed as an HTTP endpoint with serverless:
#
# For full config options, check the docs:
#    docs.serverless.com
#
service: XXX

plugins:
  - serverless-aws-documentation
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true

provider:
  name: aws
  stage: dev
  region: eu-central-1
  runtime: python3.6
  environment:
    # our cache
    REDIS: xx-xx-redis-001.xxx.euc1.cache.amazonaws.com

functions:
  hello:
    handler: hello/hello_world.say_hello
    events:
      - http:
          path: hello
          method: get
          # private: true # <-- Requires clients to add API keys values in the `x-api-key` header of their request
          # authorizer: # <-- An AWS API Gateway custom authorizer function
  testRedis:
    handler: test_redis/test_redis.test_redis
    events:
      - http:
          path: test-redis
          method: get
When I trigger the endpoint via API Gateway, the lambda just times out after about 7 seconds.
The environment variable is read properly; no error message is displayed.
I suppose there's a problem connecting to the redis, but the tutorials are quite explicit - not sure what the problem could be.
The problem might be the need to set up a NAT; I'm not sure how to accomplish this task with serverless.
I ran into this issue as well. For me, there were a few problems that had to be ironed out:
The lambda needs VPC permissions.
The ElastiCache security group needs an inbound rule from the Lambda security group that allows communication on the Redis port. I thought they could just be in the same security group.
And the real kicker: I had turned on encryption in transit. This meant that I needed to pass redis.Redis(..., ssl=True). The redis-py page mentions that ssl_cert_reqs needs to be set to None for use with ElastiCache, but that didn't seem to be true in my case. I did, however, need to pass ssl=True.
It makes sense that ssl=True needed to be set, but the connection was just timing out, so I went round and round trying to figure out what the problem with the permissions/VPC/SG setup was.
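For reference, a minimal sketch of the connection that works once encryption in transit is enabled (redis.Redis and its ssl/socket timeout parameters are standard redis-py; the endpoint is a placeholder, and the timeouts are an assumption so that failures surface quickly instead of hitting the Lambda timeout):

import redis

# ElastiCache endpoint placeholder; in-transit encryption requires TLS.
redis_conn = redis.Redis(
    host="xx-xx-redis-001.xxx.euc1.cache.amazonaws.com",
    port=6379,
    db=0,
    ssl=True,                  # required once encryption in transit is enabled
    socket_connect_timeout=5,  # fail fast instead of timing out the Lambda
    socket_timeout=5,
)
redis_conn.set("foo", "boo")
print(redis_conn.get("foo"))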
Try having the lambda in the same VPC and security group as your ElastiCache cluster.